Documentation

1 - PlaidCloud

Here you will find documentation on using the core aspects of PlaidCloud including data management (Analyze), data visualization (Dashboards), and document management, as well as the expression library.

1.1 - Analyze

The Analyze tools are used to develop projects, which manage sets of functions and data objects that serve a business purpose. A Project is used exclusively in data management and does not include the display of data through a dashboard. To access the Analyze functionality in PlaidCloud, click the Analyze (three gear) icon in the left menu.

1.1.1 - Projects

A Project is a place in PlaidCloud to manage a set of functions and data objects that serve a business purpose. For example, a Project could be BOM_Build, which is a set of workflows, tables, data imports, and so on that all work together to build the Bill of Materials. A Project is used exclusively in data management and does not include the display of data through a dashboard.

1.1.1.1 - Viewing Projects

Viewing authorized projects

Description

Within Analyze, the Projects function provides a level of compartmentalization that makes controlling access and modifying privileges much easier. Projects are what provide the primary segregation of data within a workspace tab.

While Projects fall under Analyze, workflows fall under Projects, meaning that Projects contain workflows. Workflows, simply put, perform a wide range of tasks including data transformation pipelines, data analysis, and even ETL processes. More information on workflows can be found under the “Workflows” section.

Accessing Projects

To access Projects:

  1. Open Analyze
  2. Select “Projects” from the top menu bar

This displays the Projects Hierarchy. From here, you will see a hierarchy of the projects to which you have access. There may be additional projects within the workspace, but if you are not an owner or assigned to a project, it will not be visible to you.

1.1.1.2 - Managing Projects

Create and Manage new projects

Searching

Searching for projects is accomplished by using the filter box in the lower left of the hierarchy. The search filter will search project names and labels for matches and show the results in the hierarchy above.

Creating New Projects

To create a new project:

  1. Open Analyze
  2. Select “Projects” from the top menu bar
  3. Click the “New Project” button
  4. Complete the form information including the “Access Control” section
  5. Click “Create”

The project is now ready for updating access permissions, adding owners, and creating workflows.

Automatic Change Tracking

All changes to a project, including workflows, data editors, hierarchies, table structures, and UDFs, are tracked and allow point-in-time recovery of the state. This allows for easy recovery from user-introduced problems, or for copying a different point in time to another project for comparison.

In addition to overall tracking, projects and their elements also allow for versioning. Not only is creating a version easy, but you can also merge changes from one version to another. This provides a simple way to keep snapshots, or to create a version for development and then merge those changes into the non-development version when ready.

Managing Project Access

Types of Access

Project security has been simplified into three types of access:

  • All Workspace Members
  • Specific Members Only
  • Specific Security Groups Only

Setting the project security is easy to do:

  1. Open Analyze
  2. Select “Projects”
  3. Click the edit icon of the project you want to restrict
  4. Choose desired restriction under “Access Control”
  5. Click “Update”

All Workspace Members

“All Workspace Members” access is the simplest option since it provides access to all members of the workspace and does not require any additional assignment of members.

Specific Members Only

The “Specific Members Only” access setting requires assignment of each member to the project. To assign members to a project:

  1. Open Analyze
  2. Select “Projects” from the top menu bar
  3. Click the members icon
  4. Grant access to members by selecting the check box next to their name in the “Access” column
  5. Click “Update”

For workspaces with large numbers of members, this approach can often require more effort than desired, which is where security groups become useful.

Specific Security Groups Only

The “Specific Security Groups Only” option enables assigning specific security groups permission to access the project. With access restrictions relying on association with one or more security groups, administering access for larger groups is much simpler. This is particularly useful when combined with single sign-on automatic group association. By using single sign-on to set member group assignments, these groups can also enable and disable access to projects implicitly.

To edit assigned groups:

  1. Open Analyze
  2. Select “Projects” from the top menu bar
  3. Click the security groups icon
  4. Grant access to security groups by selecting the check box next to their name in the “Access” column
  5. Click “Update”

Setting Different Viewing Roles

A project may require several transformations and tables to complete intermediate steps, while the end result may consist of only a few tables. Members do not always need to view all the elements of a project, sometimes just the final product. PlaidCloud offers the ability to set different viewing roles to declutter the view and control what is visible to each member.

There are three built-in viewing roles: Architect, Manager, and Explorer.

The Architect role is the simplest because it allows full visibility and control of projects, workflows, tables, variables, data editors, hierarchies, and user defined functions.

The Manager and Explorer roles have no specific access privileges but can be custom-defined. In other words, you can choose which items are visible to each group.

You can make everyone an Architect if you feel visibility of everything within the project is needed; otherwise, you can designate members as Manager and/or Explorer project members and control visibility that way.

To set a member's role:

  1. Open Analyze
  2. Select “Projects”
  3. Click the members icon
  4. Select the member whose role you would like to change
  5. Double click their current role in the “Role” column
  6. Select the desired role
  7. Click “Update”

Managing Project Variables

When running a project or workflow, it may be useful to set variables for recurring tasks in order to decrease clutter and save time. A variable works much like an algebraic variable: you define what it represents, and it is substituted wherever it is referenced. PlaidCloud allows you to set these variables at the project level, which affects all the workflows within that project, or at the workflow level, which affects only that specific workflow.

To set a project level variable:

  1. Open Analyze
  2. Select “Projects”
  3. Click the Manage Project Variables icon

From the Variables Table you can view the variables and view or edit the current values. You can also add new variables by clicking the “New Project Variable” button, or delete existing ones.
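For example (the variable name and formula here are hypothetical), a project variable named fx_rate with a value of 1.08 could be referenced inside curly braces in any workflow formula or field value:

    sales_local * {fx_rate}    # resolves to: sales_local * 1.08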

Cloning a Project

When a project is cloned, there may be project-related references, such as workflow steps, that run within the project. PlaidCloud offers two options for performing a full duplication:

  • Duplicate with updating project references
  • Duplicate without updating project references

Duplicating with updating project references means all the related references point to the newly duplicated project.

To duplicate with updating project references:

  1. Open Analyze
  2. Select “Projects”
  3. Select the project you would like to duplicate
  4. Click the “Actions” button
  5. Select the “Duplicate with project reference updates” option

Duplicating without updating project references means all of the related references continue pointing to the original project.

To duplicate without updating project references:

  1. Open Analyze
  2. Select “Projects”
  3. Select the project you would like to duplicate
  4. Click the “Actions” button
  5. Select the “Duplicate without project reference updates” option

Viewing the Project Report

When a project or workflow is dynamic, maintaining detailed documentation becomes a challenge. To help solve this problem, PlaidCloud provides the ability to generate a project-level report that gives detailed documentation of workflows, workflow steps, user defined transforms, variables, and tables. This report is generated on-demand and reflects the current state of the project.

To download the report:

  1. Open Analyze
  2. Select “Projects”
  3. Click the report icon

1.1.1.3 - Managing Tables and Views

Organize and manage your tables and views

PlaidCloud offers the ability to organize and manage tables, including labels. Tables are available to all workflows within a project and have many tools and options.

In addition to tables, PlaidCloud also offers Views based on table data. Using Views allows for instant updates when underlying table changes occur, as well as saving data storage space.

Options include:

  • The same table can exist on multiple paths in the hierarchy (alternate hierarchies)
  • Tables are taggable for easier search and inclusion in PlaidCloud processes
  • Tables can be versioned
  • Tables can be published so they are available for Dashboard Visualizations

PlaidCloud uses a path-based system to organize tables, like you would use to navigate a series of folders, allowing for a more flexible and logical organization of tables. Using this system, tables can be moved within a hierarchy, or multiple references to one table from different locations in the hierarchy (alternate hierarchies), can be created. The ability to manage tables using this method allows the structure to reflect operational needs, reporting, and control.

Searching

Searching for tables is accomplished by using the filter box in the lower left of the hierarchy. The search filter will search table names and labels for matches and show the results in the hierarchy above.

Move

To move a table:

  1. Drag it into the folder where you wish it to be located

Rename

To rename a table:

  1. Right click on the table
  2. Select the rename option
  3. Type in the new name and save it

The table is now renamed, but it retains its original unique identifier.

Clear

To clear a table:

  1. Select the tables in the hierarchy
  2. Click the clear button on the top toolbar

Note: You can clear a single table or multiple tables

Delete

To delete a table:

  1. Select the tables in the hierarchy
  2. Click the delete button on the top toolbar

The delete operation will check to see if the table is in use by workflow steps or Views. If so, you will be asked to remove those associations before deletion can occur.

Note: You can also force delete the table(s). Force deletion of the table(s) will leave references broken, so this should be used sparingly.

Create New Directory Structure

To add a new folder:

  1. Click the New Folder button on the toolbar

To add a folder to an existing folder:

  1. Right-click on the folder
  2. Select New Folder

View Data (Table Explorer)

Table data is viewed using the Table Explorer. The Table Explorer provides a grid view of the data as well as a column-by-column summary of values and statistics. Point-and-click filtering and exporting to familiar file formats are both available. The filter selections can also be saved as an Extract step usable in a workflow.

Publish Table for Reporting

Dashboard Visualizations are purposely limited to tables that have been published. When publishing a table, you can provide a unique name that may distinguish the data. This may be useful when the table has a more obscure name in the workflow that generated it, but needs a clearer name for those building dashboards.

Published tables do not have paths associated with them. They will appear as a list of tables for use in the dashboards area.

Mark Table for Viewing Roles

The viewing of tables by various roles can be controlled by clicking the Explorer or Manager checkboxes. If multiple tables need to be updated, select the tables in the hierarchy and select the desired viewing role from the Actions menu on the top toolbar.

Memos to Describe Table Contents

Add a memo to a table to help understand the data.

View Table Shape, Size, and Last Updated Time

The number of rows, columns, and the data size for each table is shown in the table hierarchy. For very large tables (multi-million rows) the row count may be estimated and an indicator for approximate row count will be shown.

View Additional Table Attributes

To view and edit other table attributes:

  1. Select a table
  2. View the table context form on the right

Duplicate a Table

To duplicate a table:

  1. Select the table
  2. Click the duplicate button on the top toolbar

1.1.1.4 - Managing Hierarchies

Create and organize your own workflow hierarchies

PlaidCloud offers the ability to organize and manage hierarchies, including labels. Hierarchies are available to all workflows within a project.

PlaidCloud uses a path-based system to organize hierarchies, like you would use to navigate a series of folders, allowing for a more flexible and logical organization (control hierarchy) of the hierarchies. Using this system, hierarchies can be moved within a control hierarchy, or multiple references to one hierarchy from different locations in the control hierarchy (alternate hierarchies) can be created. The ability to manage hierarchies using this method allows the structure to reflect operational needs, reporting, and control.

Searching

To search for hierarchies:

  1. Use the filter box in the lower left of the control hierarchy
  2. The search filter will search hierarchy names and labels for matches and show the results in the control hierarchy above

Move

To move a hierarchy within the control hierarchy:

  1. Drag it into the folder where you wish to place it

Rename

To rename a hierarchy:

  1. Right click on the hierarchy
  2. Select the rename option
  3. Type in the new name and save it

The hierarchy is now renamed, but it will retain its original unique identifier.

Clear

You can clear a single hierarchy or multiple hierarchies.

To clear a hierarchy:

  1. Select the hierarchies in the control hierarchy
  2. Click the clear button on the top toolbar

Delete

You can delete a single hierarchy or multiple hierarchies.

To delete a hierarchy:

  1. Select the hierarchies in the control hierarchy
  2. Click the delete button on the top toolbar

The delete operation will check to see if the hierarchy is in use by workflow steps, tables, or views. If so, you will be asked to remove those associations.

Create New Directory Structure

To create a new folder:

  1. Click the New Folder button on the toolbar

To add a folder to an existing folder:

  1. Right-click on the folder
  2. Select New Folder

Mark Hierarchy for Viewing Roles

To mark hierarchies for viewing roles:

  1. Click the Explorer or Manager checkboxes

If multiple hierarchies need to be updated:

  1. Select the hierarchies in the control hierarchy
  2. Select the desired viewing role from the Actions menu on the top toolbar

Memos to Describe Hierarchy Contents

To add a memo to a hierarchy:

  1. Select the hierarchy
  2. Update the memo in the right context form

View Additional Hierarchy Attributes

To view and edit additional hierarchy attributes:

  1. Select a hierarchy
  2. View the hierarchy context form on the right

Duplicate a Hierarchy

To duplicate a hierarchy:

  1. Select the hierarchy
  2. Click the duplicate button on the top toolbar

1.1.1.5 - Managing Data Editors

Create and Edit table data through user interaction

PlaidCloud offers the ability to organize and manage data editors, including labels. Data Editors allow editing table data or creating data by user interaction.

PlaidCloud uses a path-based system to organize data editors, like you would use to navigate a series of folders, allowing for a more flexible and logical organization (control hierarchy) of the data editors. Using this system, data editors can be moved within a control hierarchy, and multiple references to one data editor from different locations in the control hierarchy (alternate hierarchies) can be created. The ability to manage data editors using this method allows the structure to reflect operational needs, reporting, and control.

Searching

To search for data editors:

  1. Use the filter box in the lower left of the control hierarchy

The search filter will search data editors’ names and labels for matches and show the results in the control hierarchy above.

Move

To move a data editor within the control hierarchy:

  1. Drag it into the folder where you wish to place it

Rename

To rename a data editor:

  1. Right click on the data editor
  2. Select the rename option
  3. Type in the new name and save it

The data editor will now be renamed but retain its original unique identifier.

Delete

You can delete a single data editor or multiple data editors.

To delete a data editor:

  1. Select the data editors in the control hierarchy
  2. Click the delete button on the top toolbar

Create New Directory Structure

To add a new folder to the control hierarchy:

  1. Click the New Folder button on the toolbar

To add a folder to an existing folder:

  1. Right-click on the folder
  2. Select New Folder

Mark Data Editor for Viewing Roles

To mark data editors for viewing roles:

  1. Click the Explorer or Manager checkboxes

To update multiple data editors:

  1. Select the data editors in the control hierarchy
  2. Select the desired viewing role from the Actions menu on the top toolbar

Memos to Describe Data Editor Contents

To add a memo to a data editor:

  1. Select the data editor
  2. Update the memo in the right context form

View Additional Data Editor Attributes

To view and edit additional data editor attributes:

  1. Select the data editor and view the data editor context form on the right

Duplicate a Data Editor

To duplicate a data editor:

  1. Select the data editor
  2. Click on the Duplicate button on the top toolbar

1.1.1.6 - Archive a Project

Create and Restore your project archives

Creating an Archive

Projects normally contain critical processes and logic, which are important to archive. If you ever need to restore the project to a specific state, having archives is essential.

PlaidCloud allows you to archive projects at any point in time. Creation of archives complements the built-in point-in-time tracking of PlaidCloud by allowing for specific points in time to be captured. This might be particularly useful before a major change or to capture the exact state of a production environment for posterity.

Full backup: This includes all the data tables included in a project. The archive may be quite large, depending on the volume of data in the project.

Partial backup: This can be used if all of the project data can be derived from other sources. In that case, it is not necessary to archive the data in the project since it remains available elsewhere. Partial archives save time and storage space when creating the archive.

To archive a project:

  1. Open Analyze
  2. Select the “Projects” tab

Restoring an Archive

Once you have an archive, you may want to restore it. You can restore an archive into a new project or into an existing project.

To restore an archive:

  1. Open Analyze
  2. Select the “Projects” tab

Archiving Schedule

Archives can also serve as a periodic backup of your project. PlaidCloud allows you to manage the backup schedule and set the retention period of the backup archives to whatever is most convenient or desired.

Since all changes to a project are automatically tracked, archiving is not necessary for rollback purposes. However, it does provide specific snapshots of the project state, which is often useful for control purposes and/or having the ability to recover to a known point.

To set an archiving schedule:

  1. Open Analyze
  2. Select the “Projects” tab
  3. Click the backup icon
  4. Choose a directory destination in a Document account
  5. Choose the backup frequency and retention
  6. Choose which items to back up
  7. Click “Update”

1.1.1.7 - Viewing the Project Log

View, sort and clear your project activities and assignments

Viewing and Sorting the Project Log

As actions occur within a project, such as assigning new members or running workflows, the Project Log stores the events. The Project Log consolidates the view of all individual workflow logs in order to provide a more comprehensive view of project activities. PlaidCloud also enables the viewer to sort and filter a Project Log and view details of a particular log entry.

To view the Project Log:

  1. Open Analyze
  2. Select “Projects”
  3. Click the log icon

To sort and filter the Project Log:

  1. Click the small icon to the right of the log and to the left of the “Log Message”
  2. Select the desired filter criteria

To view details of a particular log entry:

  1. Right click on the desired log entry
  2. View the “Log Message” box for details

Clearing the Project Log

Clearing the Project Log may be desirable from time to time.

To clear the Project Log:

  1. Open Analyze
  2. Select “Projects”
  3. Click the log icon
  4. Click the “Clear Log” button

1.1.2 - Data Management

Within a project, you can create and modify tables, views, and dimensions.

1.1.2.1 - Using Tables and Views

Using and managing tables and views

Tabular data and information in PlaidCloud is stored in Greenplum data warehouses. This provides massive scalability and performance while using well understood and mature technology to minimize risk of data loss or corruption.

In addition, utilizing a data warehouse that operates with a common syntax allows third-party tools to connect and explore data directly. Essentially, this makes the PlaidCloud data ecosystem open and explorable while also ensuring industry-leading security and access controls.

Tables

Tables hold the physical tabular data throughout PlaidCloud. Individual tables can hold many terabytes of data if needed. Data is stored across many physical servers and is automatically mirrored to ensure data integrity and high availability.

Tables consist of columns of various data types. Using an appropriate data type helps with performance and, especially, the storage size of your data, since PlaidCloud can compress data more effectively when the most appropriate data type is used. PlaidCloud usually infers the data type, but it is also possible to change data types using the column mappers in workflow steps.

Views

Views act just like tables but don't hold any physical data. They are logical representations of tables derived through a query. Using views can save on storage.

There are some limitations to the use of views, though. Be aware of the following:

  • View Stacking Performance - View stacking (view of a view of a view...etc) can impact performance on very large tables or complex calculations. It might be necessary to create intermediate tables to improve performance.
  • Dashboard Performance - While perfectly fine to publish a view for Dashboard use, for very large tables you may want to publish a table rather than a view for optimal user experience.
  • Dynamic Data - The data in a view changes when the underlying referenced table data changes. This can be both a benefit (everything updates automatically) or an unexpected headache if the desire was a static set of data.

1.1.2.2 - Table Explorer

Table Explorer provides powerful and readily accessible data exploration capabilities

Table Explorer provides a powerful and readily accessible data exploration tool with built in filtering, summarization, and other features to make life easy for people working with large and complex data.

Table Explorer supports exploration on any size dataset, so you can use the same tool no matter how much your data grows. It also provides point-and-click filtering along with advanced filter capabilities to zero in on the data you need. Best of all, anywhere tables or views appear in PlaidCloud, you can click on them to explore with Table Explorer. By being fully integrated, data access is only a click away.

The Grid view provides a tabular view of the data. The Details view provides a summary of each column, a count of unique values, and summary statistics for numeric columns.

Data can be exported directly from a filtered set, and filters can be saved and shared with others. Finally, the filters and column settings can be saved directly as a workflow Extract step.

The Grid View

The Grid view provides a tabular view of the data.

Setting the row limit

By default, the row limit is set to 5,000 rows. However, this can be adjusted or disabled entirely.

The rows displayed, along with the total size of the dataset, are shown at the bottom of the table. This provides three key pieces of information:

  1. The current row count shown based on the row limit applied
  2. The size of the global data after filters are applied
  3. The size of the unfiltered global data

Sorting locally versus globally

The Grid view provides the ability to click on a column header and sort the data based on that column. However, this method only sorts the dataset that has already been retrieved; it does not sort the full dataset. If your retrieved data contains the entire dataset, this distinction is immaterial; however, if your full dataset is larger than what appears in the browser, this may not be the desired sort result.

If you want to sort the global dataset before retrieving the limited data that will appear in your browser, apply those sorts to the columns in the Details view by clicking the Sort icon at the top of each column. An additional benefit of the global sort approach is that you can apply multiple sorts with a mix of sort directions.
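The difference between local and global sorting can be illustrated with a minimal Python sketch (the data and row limit are illustrative):

    # Full dataset held on the server
    data = [5, 1, 9, 3, 7, 2]

    # Local sort: retrieve rows first (row limit of 3), then sort them
    retrieved = data[:3]           # -> [5, 1, 9]
    print(sorted(retrieved))       # [1, 5, 9]

    # Global sort: sort the full dataset first, then retrieve the limit
    print(sorted(data)[:3])        # [1, 2, 3]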

Quick reference column list

All of the columns in the table or view are shown on the left of the Table Explorer window by default. This column list can be toggled on and off using the column list toggle button.

The column list provides a number of quick access and useful features including:

  • Double clicking an item jumps to the column in the Grid or Details view
  • Control visibility of the column through the visibility checkbox
  • Use multi-select and right-click to include or exclude many columns at once
  • Quickly view the data type of each column using the data type icons
  • View the total column count

The Details View

The Details view provides an efficient way to view the data at a high level and exposes tools to quickly filter down to information with point-and-click operations.

Column data and unique counts

Each column is shown, provided it is currently marked as visible. The column summary displays the top 1,000 unique values by count. The number of unique values shown can be adjusted by changing the Detailed Rows Displayed selection to a different value.

Managing point-and-click filters

Each column provides for point-and-click filtering by activating the filter toggle at the top of the column. Select the items in the column that you would like to include in the resulting data. Multi-select is supported.

Once you apply a filter, you may wish to remove individual items or clear the entire column filter without clearing all filters. This is accomplished by opening the dropdown on the column filter button and unchecking individual items, or selecting the clear all option at the top.

Managing Summarization

Summarization of the data can be applied by toggling the Summarize button to On. When the Summarize button is activated, each column will display a summarization type to apply. Adjust the summarization type desired for each column.

When the desired summarizations are complete, refresh the data and the summarizations will be applied.

Examples of summarization types are Min, Max, Sum, Count, and Count Distinct.
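Conceptually, summarization behaves like a grouped aggregation. A minimal pandas sketch of the idea (the column names and data are illustrative, not PlaidCloud's implementation):

    import pandas as pd

    df = pd.DataFrame({
        "region": ["EMEA", "EMEA", "APAC"],
        "amount": [100, 250, 75],
    })

    # "Group" summarization on region, "Sum" summarization on amount
    print(df.groupby("region", as_index=False)["amount"].sum())
    #   region  amount
    # 0   APAC      75
    # 1   EMEA     350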

Finding Distinct Values

Activating the Distinct button will help reduce the data to only a set of unique records. When the Distinct button is active, a Distinct checkbox will appear on each column. Uncheck the columns that do not define uniqueness in the dataset. For example, if you want to find the unique set of customers in a customer order table, you would select only the customer column rather than including the customer order number too.
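In pandas terms, the customer example above is equivalent to deduplicating on the checked column only (an illustrative sketch):

    import pandas as pd

    orders = pd.DataFrame({
        "customer": ["ACME", "ACME", "Globex"],
        "order_number": [1001, 1002, 1003],
    })

    # Only the "customer" column is checked as Distinct:
    # the unique set of customers, ignoring order numbers.
    print(orders[["customer"]].drop_duplicates())
    #   customer
    # 0     ACME
    # 2   Globex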

Summary statistics for numeric columns

Integer and numeric columns automatically display summary statistics at the bottom of the column information. This includes:

  • Min
  • Max
  • Mean
  • Sum
  • Standard Deviation
  • Variance

These statistics are calculated on the full filtered dataset.

Copying Data

It is sometimes useful to allow for copying of selected data from PlaidCloud so that it can be pasted into other applications such as a spreadsheet.

From the Copy button in the upper right, there are several copy options available for the data:

  • Copy All - Copies all of the data to the clipboard
  • Copy Selection - Copies the selected data to the clipboard
  • Copy Cell - Copies only the contents of a single cell to the clipboard
  • Copy Column - Copies the full contents of the column to the clipboard

Exporting Data

Exporting data from the Table Explorer interface allows exporting of the filtered data with only the columns visible. You can export in the following formats:

  • Microsoft Excel (xlsx)
  • CSV (Comma)
  • TSV (Tab)
  • PSV (Pipe)

The Download menu also offers the ability to download only the rows visible in the browser. This is based on using the row limit specified.

Additional Actions

Additional useful actions are available under the Actions menu.

Save as Extract Step

When exploring data, it is often in the context of determining how to filter it for a data pipeline process. This often consists of applying multiple filters including advanced filters to zero in on the desired result.

Instead of attempting to replicate all the filters, columns, summarizations, and sorts in an Extract Step, you can simply save the existing Table Explorer settings as a new Extract Step.

Save as View

Similar to saving the current Table Explorer settings as an Extract Step above, you can also save the settings directly as a view.

This can be particularly useful when trying to construct slices of data for reporting or other downstream processes that don't require a data pipeline.

Manage Saved Filters

You never have to lose your filter work. You can save your Table Explorer settings as a saved filter. Saved filters also include column visibility, summarizations, column filters, advanced filters, and sorts.

You can also let others use a saved filter by checking the Public checkbox when saving the filter.

From the Actions menu you can also choose to delete and rename saved filters.

Advanced Filters

While point-and-click column filters allow for quick application of filters to zero in on the desired results, sometimes filter conditions are complex and need more advanced specifications.

The advanced filter area provides both a pre-aggregation filter as well as a post-aggregation filter, if Summarize is enabled.

Any valid Python expression is acceptable to subset the data. Please see Expressions for more details and examples.
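For example, filters might look like the following (the column names are hypothetical; see Expressions for the functions available):

    # Pre-aggregation: keep only positive EMEA rows
    quantity > 0 and region == 'EMEA'

    # Post-aggregation (with Summarize enabled): keep only large groups
    total_amount >= 10000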

1.1.2.3 - Using Dimensions (Hierarchies)

Using and managing hierarchical data

PlaidCloud natively manages dimension (i.e. hierarchical) data through our proprietary hierarchy storage system. We decided to construct our own purpose-built solution because other commercial and open-source solutions presented limitations that were not easily overcome.

The hierarchy storage supports not only hierarchical relationships but also properties, aliases, attributes, and values. It is also designed to operate on large structures and perform operations quickly including complex branch and leaf navigation.

Dimensions are managed in the Dimensions tab within each PlaidCloud project configuration area.

Main Hierarchy

Each dimension (i.e. hierarchical dataset) always consists of a main hierarchy. Every member of the hierarchy is represented here.

Having a main hierarchy helps establish the complete set of leaf nodes in the dimension.

Alternate or Attribute Hierarchies

Alternate hierarchies are different representations of the main hierarchy leaf nodes. An alternate hierarchy can consist of a subset of the leaf nodes and roll-ups (i.e. folders) in the main hierarchy, as well as its own set of unique roll-ups.

This provides maximum flexibility: alternate hierarchies can update automatically when the children of a roll-up change, or their membership can be strictly controlled by specifying only the leaf nodes required.

Managing Dimensions

Creating a Dimension

From the New button in the toolbar, select New Dimension. Enter in the desired name, directory, and a descriptive memo.

Once you press the Create button the dimension will be created and ready for immediate use.

You can also create a dimension from a workflow using the Dimension Create workflow step.

Deleting a Dimension

To delete an existing dimension, select the dimension record and open the Actions menu in the upper right. Select Delete Dimension.

This will delete the dimension and all underlying data.

You can also delete a dimension from a workflow using the Dimension Delete workflow step.

It is also possible to clear the dimension of all structure, values, aliases, properties, and alternate hierarchies without deleting the dimension by using the Dimension Clear workflow step.

Copying a Dimension

To copy an existing dimension, select the dimension record and open the Actions menu in the upper right. Select Copy Dimension.

This will open a dialog where you can specify the name of the copy. Click the Create Copy button to make a copy of the dimension including values, aliases, properties, and alternate hierarchies.

Sorting a Dimension

The dimension management area makes it easy to move hierarchy members up and down as well as changing parents. It also makes it easy to create and delete members.

However, manually moving hierarchy items around can get tedious, so you can also sort a dimension from a workflow using the Dimension Sort workflow step. This can be a big time saver, especially after data loads or major changes.

Loading Dimensions

Since dimensions represent hierarchical data structures, the load process must convey the relationships in the data. PlaidCloud supports two different data structures for loading dimensions:

  • Parent-Child - The data is organized vertically with a Parent column and Child column defining each parent of a child throughout the structure
  • Levels - The data is organized horizontally with each column representing a level in the hierarchy from left to right

In addition to structure, other dimension information can be included in the load process such as values, aliases, and properties.

See the Workflow Step for Dimension Load for more information.
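As an illustration, the same small structure could be provided in either layout (the member names are hypothetical):

    Parent-Child layout (vertical):

      parent         child
      Total          North America
      North America  US
      North America  Canada

    Levels layout (horizontal):

      level_1  level_2        level_3
      Total    North America  US
      Total    North America  Canada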

Dimension Property Inheritance

A dimension may inherit a property from an ancestor. To enable inheritance, click the dropdown next to Properties and select Inherited Properties. All child nodes in the dimension will then inherit the properties of their parents (see the sketch after the notes below).

Usage Notes:

  • Inheritance will happen for all properties in a dimension. You cannot set inheritance on one property but not another.
  • If you change and then delete the value of a child property, it will default back to the parent value. You cannot have a null value when the parent has a value.
  • If you set the value of a child property, its children will inherit the child property instead of the parent.
  • Inheritance will go all the way down to the leaf node.
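A minimal Python sketch of the inheritance rule described above (a hypothetical structure, not PlaidCloud's implementation): a node uses its own property value if set, otherwise the value of the nearest ancestor that has one.

    def resolve_property(node, name, parents, values):
        """Walk up the hierarchy until a property value is found."""
        while node is not None:
            if (node, name) in values:
                return values[(node, name)]
            node = parents.get(node)   # move to the parent node
        return None                    # no ancestor defines the property

    parents = {"US": "North America", "North America": "Total", "Total": None}
    values = {("Total", "currency"): "USD"}

    # The leaf inherits "USD" all the way down from the root.
    print(resolve_property("US", "currency", parents, values))  # USD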

1.1.2.4 - Publishing Tables

Publishing Tables and Views to allow usage in Dashboard, PlaidXL, and other external reporting

Since data pipelines can generate many intermediate tables and views that are useful for validation and process checks but not suitable for final results reporting, PlaidCloud provides a Publish process to help reduce the noise when building Dashboards or pulling data in PlaidXL. The Publish process helps clarify which tables and views are final and reliable for reporting purposes.

Publish

From the Tables tab in a PlaidCloud project configuration, find the table you wish to publish for use in dashboards and PlaidXL. Right-click on the table record and select Set Published Table Reporting Name from the menu.

This will open a dialog where you can specify a unique published name. This name does not need to be the same as the table or view name. Enabling a different name is often useful when referencing data sources in dashboards and PlaidXL because it can provide a friendlier name to users.

Once the table or view is published, its published name will appear in the Published As column in the Tables view.

Unpublish

Unpublishing a table or view is similar to the publish process. From the Tables tab in a PlaidCloud project configuration, find the table you wish to unpublish. Right-click on the table record and select Set Published Table Reporting Name from the menu.

When the dialog appears to set the published name, select the Unpublish button. This will remove the table from Dashboard and PlaidXL usage.

The published name will no longer appear in the Published As column.

Renaming

Renaming a published table or view is similar to the publish process. From the Tables tab in a PlaidCloud project configuration, find the table you wish to rename. Right-click on the table record and select Set Published Table Reporting Name from the menu.

When the dialog appears change the publish name to the new desired name. Press the Publish button to update the name.

The updated name will now appear in the Published As column as well as in Dashboard and PlaidXL.

1.1.3 - Workflows

A Workflow is a set of steps that load and transform data from raw state into a final form. There can be multiple workflows within a project, and those can be scheduled, run if conditions are met, or run manually. To view the workflows, open a project and go to the Workflows tab.

1.1.3.1 - Where are the Workflows

Create and Manage your own Workflows

Workflows exist within a Project. From the top menu in Analyze, click on the Projects menu item. This will open the Projects hierarchy showing the list of projects. Open a project and navigate to the Workflows tab to see the workflows in the project. Workflows are organized in a hierarchy.

The list of projects you can see is determined by your access security for each project and your Viewing Role within the project (i.e. Architect, Manager, or Explorer). If you are expecting to see a project and it is not present, it could be that you have not been granted access to the project by one of the project owners. If you are expecting to see certain workflows, but you are not an Architect on the project, then they might be hidden from your viewing role.

The status of each workflow is displayed, indicating whether it is running, has a warning or error, or completed normally. The creation and update dates are also shown, along with who created or updated the workflow.

The Workflow Explorer can be opened by double clicking on a workflow. You can then view the steps, execute a workflow or a part of a workflow, and so on.

1.1.3.2 - Workflow Explorer

View the details of your Workflows

To view the details within a workflow, find it in the project and then double click on it to open up the workflow in the explorer.

Workflow Explorer

From here, you can manage Workflow Steps including creating or modifying existing workflow steps, changing the order, executing steps, and so on.

1.1.3.3 - Create Workflow

Creating a new workflow

Once you navigate to the Workflows tab in a project, click on the New Workflow button. This will open a form where you can enter in the details of the workflow including the name and memo.

In addition, you can set a remediation workflow to run if the workflow ends in an error. A remediation workflow does not need to be set but can be useful for sending notifications or triggering other processes that may automatically remediate failures.

Once the form is complete, click on the Create button and the new workflow will be added to the project.

1.1.3.4 - Duplicate a Workflow

Making a duplicate copy of a workflow

It may be useful to copy a workflow when planning to make major changes or to replicate the process with different options. Duplicating an entire workflow is very easy in PlaidCloud. Simply select the workflows you would like to duplicate in the Workflows table of a selected project and click the Duplicate Selected Workflows button at the top of the table. This will copy the workflows and append the word Copy to the name.

Once the duplication process is complete, the workflow is fully functional. Copied workflows are completely separate from the original and can be modified without impacting the original workflow.

1.1.3.5 - Copy & Paste steps

Copy and paste steps within and across workflows

Copy Steps

It is often useful to copy steps instead of starting from scratch each time. PlaidCloud allows copying steps within workflows as well as between workflows, and even in other projects. You can select multiple steps to copy at once. Select the workflow steps within the hierarchy and click the Copy Selected Steps button at the top of the table.

This will place the selected steps in the clipboard and allow pasting within the current workflow or another one.

Copying a step will make a duplicate step within the project. If you want to place the same step in more than one location in a workflow, use the Add Step menu option to add a reference to the same step rather than a clone of the original step.

Paste Steps

After selecting steps to copy and placing them on the clipboard, you can paste those steps into the same workflow or another workflow, even in another project. There are two options when pasting the steps into the workflow:

  • Append to the end of the workflow
  • Insert after last selected row

The append option will simply append the steps to the end of the selected workflow. The insert option will insert the copied steps after the selected row. Note that if multiple steps have been copied to the clipboard from multiple areas of a workflow, pasting will insert them in order but without any nested hierarchy information from when they were copied. The paste is a flat list of steps to insert only. This might be unexpected, but it is safer than recreating in the target workflow all of the directory structure that existed in the source workflow.

1.1.3.6 - Change the order of steps in a workflow

Move steps up and down in a workflow to control the flow of execution

There are two ways to update the order of steps in a workflow. The first is to use the up and down arrows in the Workflows table to move a step up or down. The second is to use the Step Move option, which makes large changes much easier: it allows you to move a step to the top, to the bottom, or after a specific step in one operation.

1.1.3.7 - Run a workflow

How to run a workflow from the workflow management area

You can trigger a full workflow run by either clicking on the run icon from the Workflows hierarchy or by selecting Run All from the Actions menu within a specific workflow.

You can also click on the Toggle Start/Stop button at the top of the workflow table. This toggle button will stop a running workflow or start a workflow.

1.1.3.8 - Running one step in a workflow

Execute a single step within a workflow

During initial workflow development, testing, or troubleshooting, it is often useful to run steps individually. To run a single step in isolation, right click on the step and select Run Step from the context menu.

1.1.3.9 - Running a range of steps in a workflow

How to run a selected range of steps together as a mini-workflow

While running individual steps is useful, it also may be useful to run subsets of an entire workflow for development, testing, or troubleshooting. To run a subset of steps, select all the steps you would like to run and select Run Selected from the Actions menu at the top of the workflow steps hierarchy. This will trigger a normal workflow processing but start the workflow at the beginning of the selected steps and stop once the last selected step is complete.

1.1.3.10 - Managing Step Errors

Control the behavior of a step when errors occur

If a workflow experiences an error during processing, an error indicator is displayed on both the workflow and the step that had the error. PlaidCloud can retry a failed step multiple times. This is often useful if the step is accessing remote systems or data that may not be highly available or intermittently fail for unknown reasons. The retry capability can be set to retry many times as well as add a delay between retries from seconds to hours.
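The retry behavior is conceptually similar to this Python sketch (the retry count and delay are illustrative settings):

    import time

    def run_with_retries(step, max_retries=3, delay_seconds=60):
        """Re-run a failing step, waiting between attempts."""
        for attempt in range(max_retries + 1):
            try:
                return step()
            except Exception:
                if attempt == max_retries:
                    raise  # retries exhausted: the step is marked as an error
                time.sleep(delay_seconds)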

If no retry is selected or the maximum number of retries is exceeded, then the step will be marked as an error. PlaidCloud provides three levels of error handling in that case:

  • Stop the workflow when an error occurs
  • Mark the step as an error but keep processing the workflow
  • Mark the step as an error and trigger a remediation workflow process instead of continuing the current workflow

Stop the Workflow

Stopping the workflow when a step errors is the most common approach since workflows generally should run without errors. This will stop the workflow and present the error indicator on both the step and the workflow. The error will also be displayed in the activity monitor but no further action is taken.

Keep Processing

Each step can be set to continue on error in the step form. If this checkbox is enabled, the step will be marked with an error if one occurs, but the workflow will treat the error as a completion of the step and continue on. This is often useful for steps that perform tasks that can error when there is missing data but are harmless to the overall process.

Since the workflow continues on error under this scenario, it will not display an error indicator and will continue to show a running indicator.

Trigger Remediation Workflow

With a remediation workflow set as part of the workflow setup, a workflow error will immediately stop the processing of the current workflow and start processing the remediation workflow. Note that if a step is marked to continue on error, its failure will not trigger the remediation workflow. Only failures that would cause the entire workflow to stop will trigger the remediation process.

A remediation workflow may be useful for simply notifying people that a failure has occurred, or it can perform more complex processing to attempt an automatic correction of whatever caused the original workflow to fail.

1.1.3.11 - Continue on Error

Set the workflow to continue even when an error occurs

Workflow steps can be set to continue processing even when there is an error. This might be useful in workflow start-up conditions or where data may be available intermittently. If the step errors, it will be recorded as an error but the workflow will continue to process.

To set this option, click on the step edit option, the pencil icon in the workflow table, to open the edit form. Check the checkbox for Continue On Error. After saving the updated step, any errors with the step will not cause the workflow to stop.

Steps that have been set to continue on error will have a special indicator in the workflow steps hierarchy table.

1.1.3.12 - Skip steps in a workflow

How to disable steps in a workflow so they are not executed

Steps in a workflow can be set to skip during the workflow run. This may be useful if there are debugging steps or old steps that you are not prepared to completely remove from the workflow yet. To set this, you have two options:

  • Edit the step form
  • Uncheck the enabled checkbox in the workflow hierarchy

To edit the step form, click on the step edit option, the pencil icon in the workflow table, to open the edit form. Uncheck the enabled checkbox. After saving, the updated step will no longer run as part of the workflow, but it can still be run using the single step run process.

Steps that have been set to disabled will have a disabled indicator in the workflow steps hierarchy table.

1.1.3.13 - Conditional Step Execution

Control if a step is executed in a workflow based on a set of conditions

Overview

Workflow steps normally execute in the defined order for the workflow. However, it is often useful to have certain steps only execute if predefined conditions are met. By using the step conditions capability you can control execution based on the following options:

  • Variable values
  • Table has rows or is empty
  • A document or folder exists in Document
  • A document or folder is missing in Document
  • Table query result
  • Date and time conditions are met

For variables or table query result comparisons you can use the following comparisons:

  • Equal
  • Does not equal
  • Contains
  • Does not contain
  • Starts with
  • Ends with
  • Greater than
  • Less than
  • Greater than or equal
  • Less than or equal

It is also important to note that you can require multiple conditions, all of which must be met in order for the step to execute. This provides a powerful tool for controlling exactly when a step should execute.
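In other words, the checks combine as a logical AND. A rough, self-contained Python sketch of the evaluation (all values here are illustrative):

    # The step runs only if ALL condition checks pass.
    variables = {"environment": "production"}   # variable value check
    staging_row_count = 1250                    # table has rows check
    document_present = True                     # document path exists check

    conditions = [
        variables.get("environment") == "production",
        staging_row_count > 0,
        document_present,
    ]
    print(all(conditions))  # True -> the step executes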

Adding and Controlling Conditions

To activate and add conditions on a step:

  1. Find the step you want to add a condition on
  2. Click the Edit Step Details (pencil) icon
  3. Select the Conditions tab
  4. Check the Check Conditions Before Running checkbox to enable the dialog and add conditions
  5. In the Condition Checks section on the left, select the "+" to add a New Condition
  6. Add a condition from the tabbed section on the right
  7. Repeat steps 5 and 6 as needed to add all your conditions

Managing Conditions

You can add as many conditions as necessary in the Condition Checks section. As you add them, it is a good idea to give each a useful name so you can find the conditions easily in the future.

Once you add a condition, select it on the left and the condition evaluation criteria will be editable on the right.

Variable Conditions

When checking variable conditions, the Value Check Parameters section must be completed so a comparison can be made.

In the Variable or Table Field, fill in the variable name. Then select a comparison type and enter a comparison value.

Basic Table Conditions

If the condition is checking whether a table has rows or is empty, you will also need to define the table in the Table Data Selection tab.

Advanced Table Conditions

When using Advanced Table conditions, the Value Check Parameters section must be completed so a comparison can be made.

In the Variable or Table Field, fill in the field name from the table selection. Then select a comparison type and enter a comparison value.

In the Table Data Selection tab, select the table and complete the data mapping section with at least the field referenced for the condition comparison.

Document Path Conditions

If the condition is checking whether a document or folder exists, this requires picking the Document account and specifying the document path to check in the Document Path tab.

Date and Time Conditions

For Date or Time selections you can add multiple conditions if a combination of conditions is necessary. For example, if you only wanted a step to run on Mondays at 2:05am, you would create three conditions:

  • Day of the week condition set to Monday (1)
  • Hour of the day set to 2
  • Minute of the hour set to 5

For "Use Financial Close Workday", set that to the xth day of the month that your close happens on. For example, if your close happens on the 5th day of the month, have "5".

1.1.3.14 - Controlling Parallel Execution

How to control serial versus parallel execution of steps in a workflow

Workflows in PlaidCloud can be executed as a combination of serial steps and parallel operations. To set a group of steps to run in parallel, place the steps in a group within the workflow hierarchy. Right click on the group folder and select the Execute in Parallel option. This will allow all the steps in the group to trigger simultaneously and execute in parallel. Once all steps in the group complete, the next step or group in the workflow after the group will activate.
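Conceptually, a parallel group behaves like the following Python sketch (the step names are illustrative): every step in the group starts together, and the next step activates only after the whole group finishes.

    from concurrent.futures import ThreadPoolExecutor

    def run_step(name):
        print(f"running {name}")

    group = ["load_sales", "load_costs", "load_fx"]

    # All steps in the group trigger simultaneously...
    with ThreadPoolExecutor() as pool:
        list(pool.map(run_step, group))

    # ...and only once they all complete does the next step activate.
    run_step("combine_results")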

1.1.3.15 - Manage Workflow Variables

Create, view, and set workflow variable values

PlaidCloud allows variables at both the project scope and workflow scope. This allows for setting project wide variables or being able to pass information easily between workflows. The variables and values are viewed by clicking on the variables icon in the Workflows hierarchy.

From the variables table you can view the variables, the current values, and edit the values. You can also add new variables or delete existing ones.

1.1.3.16 - Viewing Workflow Log

How to view and analyze the workflow log

Viewing the Workflow Log

As things happen within a workflow, such as steps running or warnings occurring, those events are logged to the workflow log. This log is viewable from the Project area under the Log tab. The workflow log is also present in the project log in case you would like to see a more comprehensive view of logs across multiple workflows.

The log viewer allows for sorting and filtering the log as well as viewing the details of a particular log entry.

Clearing the Workflow Log

Clearing the workflow log may be desirable from time to time. From the log viewer, select the Clear Log button. This clears the log for the selected workflow, which also removes those entries from the project-level log.

1.1.3.17 - View Workflow Report

Get a summary report of the workflow and settings

Maintaining detailed documentation to support both statutory and management requirements is challenging when the projects and workflows may be dynamic. To help solve this problem, PlaidCloud provides a Workflow level report that provides detailed documentation of workflows, workflow steps, user defined functions, and variables.

The report is generated on-demand and reflects the current state of the workflow. To download the report click on the Report icon in the Workflows hierarchy.

1.1.3.18 - View a dependency audit

View all the data dependencies within a workflow

The Workflow Dependency Audit is a very helpful tool to understand data and workflow dependencies in complex interconnected workflows. Over time, as workflow processes become more complex, it may become challenging to ensure all dependencies are in the correct order. When data already exists in tables, steps will run and appear correct in many cases but may actually have a dependency issue if the data is populated out of order.

This tool will provide a dependency audit and identify issues with data dependency relationships.

1.1.4 - Workflow Steps

A Workflow Step is an individual action performed within a workflow, such as loading from a CSV file, inserting data into a table, or notifying a user via SMS that an error condition occurred. To view the steps in a workflow, go to a project's Workflows tab and open a workflow to view all its steps.

1.1.4.1 - Workflow Control Steps

1.1.4.1.1 - Create Workflow

Create a new workflow in 'Analyze'

Description

Create a new PlaidCloud Analyze workflow.

Workflow to Create

First, select the Project in which the new workflow should be created from the dropdown menu.

Next, type in a workflow name. The name should be unique to the Project.

Examples

No examples yet...

1.1.4.1.2 - Run Workflow

Run an existing workflow

Description

“Run Workflow” runs an existing workflow.

Workflow to Run

First, select the Project which contains the workflow to be run from the Project dropdown menu.

Next, select the particular workflow to be run from the Workflow dropdown menu.

Additionally, there is an option to Wait until processing completes before continuing. Selecting this checkbox will defer execution of the current workflow until the called workflow is completed with its execution. By default, this option is disabled, meaning that the current workflow in which this transform resides will continue processing in parallel along with the called workflow.

Examples

No examples yet...

1.1.4.1.3 - Stop Workflow

Stop an existing, running workflow

Description

“Stop Workflow” stops an existing, running workflow.

Workflow to Stop

First, select the Project which contains the workflow to be stopped from the Project dropdown menu.

Next, select the particular workflow to be stopped from the Workflow dropdown menu.

Examples

No examples yet...

1.1.4.1.4 - Copy Workflow

Make a copy of an existing PlaidCloud Analyze workflow

Description

Make a copy of an existing PlaidCloud Analyze workflow.

Workflow to Copy

First, select the Project which contains the workflow to be copied from the Project dropdown menu.

Next, select the particular workflow to be copied from the Workflow dropdown menu.

Next, enter the new workflow name into the New Workflow field. Remember: the name should be unique to the Project.

Examples

No examples yet...

1.1.4.1.5 - Rename Workflow

Rename an Existing PlaidCloud Analyze Workflow

Description

Rename an existing PlaidCloud Analyze workflow.

Workflow to Rename

First, select the Project which contains the workflow to be renamed from the Project dropdown menu.

Next, select the particular workflow to be renamed from the Workflow dropdown menu.

Finally, enter the new workflow name in the Rename To field. Remember that the name should be unique to the Project.

Examples

No examples yet...

1.1.4.1.6 - Delete Workflow

Delete an existing PlaidCloud Analyze workflow

Description

Delete an existing PlaidCloud Analyze workflow.

Workflow to Delete

First, select the Project which contains the workflow to be deleted from the Project dropdown menu.

Next, select the particular workflow to be deleted from the Workflow dropdown menu.

Examples

No examples yet...

1.1.4.1.7 - Set Project Variable

Set a project variable for use during a workflow

Description

“Set Project Variable” sets project variables for use during the workflow. A variable name and value may contain any combination of valid characters, including spaces. Variables are referenced within the workflow by placing them inside curly braces. For example, a_variable is referenced within a transform as {a_variable} so it could be used in something like a formula or field value (e.g., {a_variable} * 2).

Variable List

The table will display the list of registered project variables and the current values. Enter the value for the variable desired. It’s also possible to set variable values without registering the variable first by simply adding the variable to the list.
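
For illustration, assume a project variable named current_month (a hypothetical name) is set to 2024_01. A reference such as

legal_entity/inputs/{current_month}/ledger_values.csv

would resolve to

legal_entity/inputs/2024_01/ledger_values.csv

when the workflow runs.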

Examples

No examples yet...

1.1.4.1.8 - Set Workflow Variable

Set variables during a workflow

Description

“Set Workflow Variable” sets workflow variables for use during the workflow. A variable name and value may contain any combination of valid characters, including spaces. Variables are referenced within the workflow by placing them inside curly braces. For example, a_variable is referenced within a transform as {a_variable} so it could be used in something like a formula or field value (e.g. {a_variable} * 2).

Variable List

The table will display the list of registered workflow variables and the current values. Enter the value for the variable desired. It’s also possible to set variable values without registering the variable first by simply adding the variable to the list.

Examples

No examples yet...

1.1.4.1.9 - Workflow Loop

Run a workflow in a loop over a dataset, passing its values as Project variables

Description

Loops over a dataset and runs a specific workflow using the values of the looping dataset as Project variables.
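
As a conceptual sketch (pseudocode only, not a PlaidCloud API; the dataset, column names, and helper functions are hypothetical), the step behaves roughly like this:

# Pseudocode only: set_project_variables and run_workflow are
# hypothetical helpers used to illustrate the looping behavior.
looping_dataset = [
    {"period": "2024_01", "region": "NA"},
    {"period": "2024_01", "region": "EU"},
]
for row in looping_dataset:
    set_project_variables(row)        # each column becomes a Project variable, e.g. {period}
    run_workflow("chosen_workflow")   # the selected workflow runs once per row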

Workflow to Run

First, select the Project which contains the workflow to be run on each loop from the Project dropdown menu.

Next, select the particular workflow to run from the Workflow dropdown menu.

Examples

Examples coming soon

1.1.4.1.10 - Raise Workflow Error

Raises an error in a workflow

Description

Raise an error in a PlaidCloud Analyze workflow.

Raise Workflow Error

Mainly for use with step conditions, this step can be set to execute when specified conditions are met, raising an error within the workflow.

1.1.4.1.11 - Clear Workflow Log

Clear the Log from an existing PlaidCloud 'Analyze' Workflow

Description

Clear the log from an existing PlaidCloud Analyze workflow.

Workflow Log to Clear

First, select the Project which contains the workflow log to be cleared from the Project dropdown menu.

Next, select the particular workflow log to be cleared from the Workflow dropdown menu.

1.1.4.2 - Import Steps

1.1.4.2.1 - Import Archive

Import an archived project

Description

Imports a PlaidCloud table archive.

Examples

No examples yet...


Import Parameters

Import Source and Target

The file selector in this transform allows you to choose a file stored in a PlaidCloud Document location for import.

You can also choose a directory to import and all files within that directory will be imported as part of the transform run.

Source Account

Choose a PlaidCloud Document account for which you have access. This will provide you with the ability to select a directory or file in the next selection.

Search Option

The Search option allows for finding all matching files below a specified directory path to import. This can be particularly useful if many files need to be included but are stored in nested directories, or are mixed in the same directory with other files you do not want to import.

The search path selected is the starting directory to search under. The search process will look for all files within that directory as well as sub-directories that match the search conditions specified. Ensure the search criteria can be applied to the files within the sub-directories too.

The search can be applied using the following conditions:

  • Exact: Match the search text exactly
  • Starts With: Match any file that starts with the search text
  • Contains: Match any file that contains the search text
  • Ends With: Match any file that ends with the search text

Source FilePath

When a specific file or directory of files is required for import, picking the file or directory directly is a better option than using search.

To select the file or directory, simply use the browse button to pick the path for the Document account selected above.

Variable Substitution

For both the search option and the specific file/directory option, variables can be used within the path, search text, and file names.

An example that uses the current_month variable to dynamically point to the correct file:

legal_entity/inputs/{current_month}/ledger_values.csv

Target Table

The target selection for imports is limited to tables only since views do not contain underlying data.

Dynamic Option

The Dynamic option allows specification of a table using text, including variables. This is useful when employing variable-driven workflows where table and view references are relative to the variables specified.

An example that uses the current_month variable to dynamically point to the target table:

legal_entity/inputs/{current_month}/ledger_values

Static Option

When a specific table is desired as the target for the import, leave the Dynamic box unchecked and select the target Table.

If the target Table does not exist, select the Create new table button to create the table in the desired location.

Table Explorer is always available with any table selection. Click on the Table Explorer button to the right of the table selection and a Table Explorer window will open.

Remove non-ASCII Characters Option

By selecting this option, the import will remove any content that is not ASCII. While PlaidCloud fully supports Unicode (UTF-8), real-world files can contain all sorts of encodings and stray characters that make them challenging to process.

If the content of the file is expected to be ASCII only, checking this box will help ensure the import process runs smoothly.

Delete Files After Import Option

This option will allow the import process to delete the file from the PlaidCloud Document account after a successful import has completed.

This can be useful if the import files can be recreated from a system of record or there is no reason to retain the raw input files once they have been processed.

1.1.4.2.2 - Import CSV

Description

Import delimited text files from PlaidCloud Document. This includes, but is not limited to, the following delimiter types:

  • comma (, )
  • pipe (|)
  • semicolon (; )
  • tab
  • space ( )
  • at symbol (@)
  • tilde (~)
  • colon (:)

Examples

No examples yet...


Import Parameters

Import Source and Target

The file selector in this transform allows you to choose a file stored in a PlaidCloud Document location for import.

You can also choose a directory to import and all files within that directory will be imported as part of the transform run.

Source Account

Choose a PlaidCloud Document account for which you have access. This will provide you with the ability to select a directory or file in the next selection.

Search Option

The Search option allows for finding all matching files below a specified directory path to import. This can be particularly useful if many files need to be included but are stored in nested directories, or are mixed in the same directory with other files you do not want to import.

The search path selected is the starting directory to search under. The search process will look for all files within that directory as well as sub-directories that match the search conditions specified. Ensure the search criteria can be applied to the files within the sub-directories too.

The search can be applied using the following conditions:

  • Exact: Match the search text exactly
  • Starts With: Match any file that starts with the search text
  • Contains: Match any file that contains the search text
  • Ends With: Match any file that ends with the search text

Source FilePath

When a specific file or directory of files is required for import, picking the file or directory directly is a better option than using search.

To select the file or directory, simply use the browse button to pick the path for the Document account selected above.

Variable Substitution

For both the search option and the specific file/directory option, variables can be used within the path, search text, and file names.

An example that uses the current_month variable to dynamically point to the correct file:

legal_entity/inputs/{current_month}/ledger_values.csv

Target Table

The target selection for imports is limited to tables only since views do not contain underlying data.

Dynamic Option

The Dynamic option allows specification of a table using text, including variables. This is useful when employing variable-driven workflows where table and view references are relative to the variables specified.

An example that uses the current_month variable to dynamically point to the target table:

legal_entity/inputs/{current_month}/ledger_values

Static Option

When a specific table is desired as the target for the import, leave the Dynamic box unchecked and select the target Table.

If the target Table does not exist, select the Create new table button to create the table in the desired location.

Table Explorer is always available with any table selection. Click on the Table Explorer button to the right of the table selection and a Table Explorer window will open.

Remove non-ASCII Characters Option

By selecting this option, the import will remove any content that is not ASCII. While PlaidCloud fully supports Unicode (UTF-8), real-world files can contain all sorts of encodings and stray characters that make them challenging to process.

If the content of the file is expected to be ASCII only, checking this box will help ensure the import process runs smoothly.

Delete Files After Import Option

This option will allow the import process to delete the file from the PlaidCloud Document account after a successful import has completed.

This can be useful if the import files can be recreated from a system of record or there is no reason to retain the raw input files once they have been processed.

Inspect Selected Source File

By pressing the Guess Settings from Source File button, PlaidCloud will open the file and inspect it to attempt to determine the data format. Always check the guessed settings to make sure they seem correct.

Data Format

Delimiter

As mentioned above, Inspect Source File will attempt to determine the delimiter in the source file. If another delimiter is desired, use this section to specify the delimiter. Users can choose from a list of standard delimiters.

  • comma (, )
  • pipe (|)
  • semicolon (; )
  • tab
  • space ( )
  • at symbol (@)
  • tilde (~)
  • colon (:)

Header Type

Since CSVs may or may not contain headers, PlaidCloud provides a way to either use the headers, ignore headers, or use column order to determine the column alignment.

  • No Header: The CSV file contains no header. Use the source list in the Data Mapper to determine the column alignment
  • Has Header - Use Header and Override Field List: The CSV file has a header. Use the header names specified and ignore the source list in the Data Mapper.
  • Has Header - Skip Header and Use Field List Instead: The CSV file has a header but it should be ignored. Use the header names specified by the source list in the Data Mapper.

Date Format

This setting is useful if the dates contained in the CSV file are not readily recognizable as dates and times. The import process attempts to convert dates automatically, but providing the expected format helps ensure they are processed correctly.

Special Characters

The special character inputs control how PlaidCloud handles the presence of certain characters and what they mean in the context of processing the CSV (see the example after this list):

  • Quote Character: This is the character used to indicate an enclosed set of text that should be processed as a single field
  • Escape Character: This is the character used to indicate that the following character should be processed as-is and not interpreted as a special character. Useful when a field may contain the delimiter.
  • Null Character: Since CSVs don't have data types, this character provides a way to indicate that the value should be NULL rather than an empty string or 0.
  • Trailing Negatives: Some source systems generate negative numbers with trailing negative symbols instead of prefixing the negative. This setting will process those as negative numbers.
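
For illustration, consider a hypothetical comma-delimited file that uses a double quote as the quote character, a backslash as the escape character, and trailing negatives:

id,description,amount
1001,"Smith, John",45.10-
1002,east\,west,12.00

Here "Smith, John" is read as a single field because it is quoted, the escaped comma in east\,west is treated as literal text rather than a delimiter, and 45.10- is interpreted as -45.10 when the Trailing Negatives option is enabled.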

Row Selection

For input files with extraneous records, you can specify a number of rows to skip before processing the data. This is useful if files contain header blocks that must be skipped before arriving at the tabular data.

Table Data Selection

Data Mapper Configuration

Table Data Mapper

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct records, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check the box next to each corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group_By
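
As a minimal sketch of the Group By and Sum behavior (using the Python pandas library purely for illustration; the column names and values are hypothetical, and this is not how PlaidCloud executes the aggregation):

import pandas as pd

# Hypothetical data; 'region' is the Group By column, 'amount' is summed.
df = pd.DataFrame({
    "region": ["NA", "NA", "EU"],
    "amount": [10.0, 5.0, 7.5],
})

result = df.groupby("region", as_index=False)["amount"].sum()
print(result)
#   region  amount
# 0     EU     7.5
# 1     NA    15.0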

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage

Data Filters

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy which is the same syntax as other expressions.

View examples and expression functions in the Expressions area.
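
As a minimal sketch of a source filter (the column names here are hypothetical, and the exact table reference style and available helpers should be confirmed in the Expressions area), keeping only current-month rows with a positive amount might look like:

# Hypothetical sketch: {current_month} is substituted before evaluation.
and_(
    table.month == '{current_month}',
    table.amount > 0,
)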

1.1.4.2.3 - Import Excel

Import worksheets from Excel files within PlaidCloud Document

Description

Import specific worksheets from Microsoft Excel files from PlaidCloud Document. Analyze supports the legacy Excel format (XP/2003) as well as the new format (2007/2010/2013). This includes, but is not limited to, the following file types:

  • XLS
  • XLSX
  • XLSB
  • XLSM

Examples

No examples yet...


Import Parameters

Import Source and Target

The file selector in this transform allows you to choose a file stored in a PlaidCloud Document location for import.

You can also choose a directory to import and all files within that directory will be imported as part of the transform run.

Source Account

Choose a PlaidCloud Document account for which you have access. This will provide you with the ability to select a directory or file in the next selection.

Search Option

The Search option allows for finding all matching files below a specified directory path to import. This can be particularly useful if many files need to be included but are stored in nested directories, or are mixed in the same directory with other files you do not want to import.

The search path selected is the starting directory to search under. The search process will look for all files within that directory as well as sub-directories that match the search conditions specified. Ensure the search criteria can be applied to the files within the sub-directories too.

The search can be applied using the following conditions:

  • Exact: Match the search text exactly
  • Starts With: Match any file that starts with the search text
  • Contains: Match any file that contains the search text
  • Ends With: Match any file that ends with the search text

Source FilePath

When a specific file or directory of files is required for import, picking the file or directory directly is a better option than using search.

To select the file or directory, simply use the browse button to pick the path for the Document account selected above.

Variable Substitution

For both the search option and the specific file/directory option, variables can be used within the path, search text, and file names.

An example that uses the current_month variable to dynamically point to the correct file:

legal_entity/inputs/{current_month}/ledger_values.csv

Target Table

The target selection for imports is limited to tables only since views do not contain underlying data.

Dynamic Option

The Dynamic option allows specification of a table using text, including variables. This is useful when employing variable-driven workflows where table and view references are relative to the variables specified.

An example that uses the current_month variable to dynamically point to the target table:

legal_entity/inputs/{current_month}/ledger_values

Static Option

When a specific table is desired as the target for the import, leave the Dynamic box unchecked and select the target Table.

If the target Table does not exist, select the Create new table button to create the table in the desired location.

Table Explorer is always available with any table selection. Click on the Table Explorer button to the right of the table selection and a Table Explorer window will open.

Remove non-ASCII Characters Option

By selecting this option, the import will remove any content that is not ASCII. While PlaidCloud fully supports Unicode (UTF-8), real-world files can contain all sorts of encodings and stray characters that make them challenging to process.

If the content of the file is expected to be ASCII only, checking this box will help ensure the import process runs smoothly.

Delete Files After Import Option

This option will allow the import process to delete the file from the PlaidCloud Document account after a successful import has completed.

This can be useful if the import files can be recreated from a system of record or there is no reason to retain the raw input files once they have been processed.

Header Type

Since Excel files may or may not contain headers, PlaidCloud provides a way to either use the headers, ignore headers, or use column order to determine the column alignment.

  • No Header: The file contains no header. Use the source list in the Data Mapper to determine the column alignment
  • Has Header - Use Header and Override Field List: The file has a header. Use the header names specified and ignore the source list in the Data Mapper.
  • Has Header - Skip Header and Use Field List Instead: The file has a header but it should be ignored. Use the header names specified by the source list in the Data Mapper.

Row Selection

For input files with extraneous records, you can specify a number of rows to skip before processing the data. This is useful if files contain header blocks that must be skipped before arriving at the tabular data.

Worksheets to Import

Because workbooks may contain many worksheets with different data, it is possible to select which worksheets should be imported in the current import process. The options are:

  • All Worksheets
  • Worksheets Matching Search
  • Selected Worksheets

The search functionality for worksheets allows inclusion of worksheets matching the search criteria. The search criteria allows for:

  • Starts With: The worksheet name starts with the search text
  • Contains: The worksheet name contains the search text
  • Ends With: The worksheet name ends with the search text

Find Sheets in Selected File

The Find Sheets button will open the Excel file and list the available worksheets in the table. Mark the checkboxes in the table for the worksheets to be included in the import.

Table Data Selection

Data Mapper Configuration

Table Data Mapper

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct records, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check the box next to each corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group_By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage

Data Filters

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy which is the same syntax as other expressions.

View examples and expression functions in the Expressions area.

1.1.4.2.4 - Import External Database Tables

Import all or a subset of tables in an external database

Description

Includes the ability to perform delta loads and map to alternate target table names.

Examples

No examples yet...


Unique Configuration Items

None


Common Configuration Items

Remove non-ASCII Characters Option

By selecting this option, the import will remove any content that is not ASCII. While PlaidCloud fully supports Unicode (UTF-8), real-world files can contain all sorts of encodings and stray characters that make them challenging to process.

If the content of the file is expected to be ASCII only, checking this box will help ensure the import process runs smoothly.

Delete Files After Import Option

This option will allow the import process to delete the file from the PlaidCloud Document account after a successful import has completed.

This can be useful if the import files can be recreated from a system of record or there is no reason to retain the raw input files once they have been processed.

Import File Selector

The file selector in this transform allows you to choose a file stored in a PlaidCloud Document location for import.

You can also choose a directory to import and all files within that directory will be imported as part of the transform run.

Selecting a Document Account

Choose a PlaidCloud Document account for which you have access. This will provide you with the ability to select a directory or file in the next selection.

Search Option

The Search option allows for finding all matching files below a specified directory path to import. This can be particularly useful if many files need to be included but are stored in nested directories, or are mixed in the same directory with other files you do not want to import.

The search path selected is the starting directory to search under. The search process will look for all files within that directory as well as sub-directories that match the search conditions specified. Ensure the search criteria can be applied to the files within the sub-directories too.

The search can be applied using the following conditions:

  • Exact: Match the search text exactly
  • Starts With: Match any file that starts with the search text
  • Contains: Match any file that contains the search text
  • Ends With: Match any file that ends with the search text

File or Directory Selection Option

When a specific file or directory of files is required for import, picking the file or directory directly is a better option than using search.

To select the file or directory, simply use the browse button to pick the path for the Document account selected above.

Variable Substitution

For both the search option and the specific file/directory option, variables can be used within the path, search text, and file names.

An example that uses the current_month variable to dynamically point to the correct file:

legal_entity/inputs/{current_month}/ledger_values.csv

Target Table

The target selection for imports is limited to tables only since views do not contain underlying data.

Dynamic Option

The Dynamic option allows specification of a table using text, including variables. This is useful when employing variable-driven workflows where table and view references are relative to the variables specified.

An example that uses the current_month variable to dynamically point to the target table:

legal_entity/inputs/{current_month}/ledger_values

Static Option

When a specific table is desired as the target for the import, leave the Dynamic box unchecked and select the target Table.

If the target Table does not exist, select the Create new table button to create the table in the desired location.

Table Explorer is always available with any table selection. Click on the Table Explorer button to the right of the table selection and a Table Explorer window will open.

Data Mapper Configuration

Table Data Mapper

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct records, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check the box next to each corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group_By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy which is the same syntax as other expressions.

View examples and expression functions in the Expressions area.

1.1.4.2.5 - Import Fixed Width

Import Fixed Width files

Description

Imports fixed-width files.

Examples

No examples yet...


Import Parameters

Import Source and Target

The file selector in this transform allows you to choose a file stored in a PlaidCloud Document location for import.

You can also choose a directory to import and all files within that directory will be imported as part of the transform run.

Source Account

Choose a PlaidCloud Document account for which you have access. This will provide you with the ability to select a directory or file in the next selection.

Search Option

The Search option allows for finding all matching files below a specified directory path to import. This can be particularly useful if many files need to be included but are stored in nested directories, or are mixed in the same directory with other files you do not want to import.

The search path selected is the starting directory to search under. The search process will look for all files within that directory as well as sub-directories that match the search conditions specified. Ensure the search criteria can be applied to the files within the sub-directories too.

The search can be applied using the following conditions:

  • Exact: Match the search text exactly
  • Starts With: Match any file that starts with the search text
  • Contains: Match any file that contains the search text
  • Ends With: Match any file that ends with the search text

Source FilePath

When a specific file or directory of files is required for import, picking the file or directory directly is a better option than using search.

To select the file or directory, simply use the browse button to pick the path for the Document account selected above.

Variable Substitution

For both the search option and the specific file/directory option, variables can be used within the path, search text, and file names.

An example that uses the current_month variable to dynamically point to the correct file:

legal_entity/inputs/{current_month}/ledger_values.csv

Target Table

The target selection for imports is limited to tables only since views do not contain underlying data.

Dynamic Option

The Dynamic option allows specification of a table using text, including variables. This is useful when employing variable-driven workflows where table and view references are relative to the variables specified.

An example that uses the current_month variable to dynamically point to the target table:

legal_entity/inputs/{current_month}/ledger_values

Static Option

When a specific table is desired as the target for the import, leave the Dynamic box unchecked and select the target Table.

If the target Table does not exist, select the Create new table button to create the table in the desired location.

Table Explorer is always available with any table selection. Click on the Table Explorer button to the right of the table selection and a Table Explorer window will open.

Remove non-ASCII Characters Option

By selecting this option, the import will remove any content that is not ASCII. While PlaidCloud fully supports Unicode (UTF-8), real-world files can contain all sorts of encodings and stray characters that make them challenging to process.

If the content of the file is expected to be ASCII only, checking this box will help ensure the import process runs smoothly.

Delete Files After Import Option

This option will allow the import process to delete the file from the PlaidCloud Document account after a successful import has completed.

This can be useful if the import files can be recreated from a system of record or there is no reason to retain the raw input files once they have been processed.

Header Type

Since fixed-width files may or may not contain headers, PlaidCloud provides a way to either use the headers, ignore headers, or use column order to determine the column alignment.

  • No Header: The file contains no header. Use the source list in the Data Mapper to determine the column alignment
  • Has Header - Use Header and Override Field List: The file has a header. Use the header names specified and ignore the source list in the Data Mapper.
  • Has Header - Skip Header and Use Field List Instead: The file has a header but it should be ignored. Use the header names specified by the source list in the Data Mapper.

Row Selection

For input files with extraneous records, you can specify a number of rows to skip before processing the data. This is useful if files contain header blocks that must be skipped before arriving at the tabular data.

Column Widths

Enter the widths of the columns separated by commas or spaces.
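
For example, entering widths of 6, 12, 8 (hypothetical values) splits each record at those fixed positions:

100045Acme Widgets  123.45

This line imports as three fields: 100045, Acme Widgets, and 123.45.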

Table Data Selection

Data Mapper Configuration

Table Data Mapper

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct records, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check the box next to each corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group_By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage

Data Filters

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy which is the same syntax as other expressions.

View examples and expression functions in the Expressions area.

1.1.4.2.6 - Import Google BigQuery

Import data from Google BigQuery

Description

Import data from Google BigQuery.

Examples

No examples yet...


Unique Configuration Items

Coming soon...


Common Configuration Items

Remove non-ASCII Characters Option

By selecting this option, the import will remove any content that is not ASCII. While PlaidCloud fully supports Unicode (UTF-8), real-world files can contain all sorts of encodings and stray characters that make them challenging to process.

If the content of the file is expected to be ASCII only, checking this box will help ensure the import process runs smoothly.

Delete Files After Import Option

This option will allow the import process to delete the file from the PlaidCloud Document account after a successful import has completed.

This can be useful if the import files can be recreated from a system of record or there is no reason to retain the raw input files once they have been processed.

Import File Selector

The file selector in this transform allows you to choose a file stored in a PlaidCloud Document location for import.

You can also choose a directory to import and all files within that directory will be imported as part of the transform run.

Selecting a Document Account

Choose a PlaidCloud Document account for which you have access. This will provide you with the ability to select a directory or file in the next selection.

Search Option

The Search option allows for finding all matching files below a specified directory path to import. This can be particularly useful if many files need to be included but are stored in nested directories, or are mixed in the same directory with other files you do not want to import.

The search path selected is the starting directory to search under. The search process will look for all files within that directory as well as sub-directories that match the search conditions specified. Ensure the search criteria can be applied to the files within the sub-directories too.

The search can be applied using the following conditions:

  • Exact: Match the search text exactly
  • Starts With: Match any file that starts with the search text
  • Contains: Match any file that contains the search text
  • Ends With: Match any file that ends with the search text

File or Directory Selection Option

When a specific file or directory of files is required for import, picking the file or directory directly is a better option than using search.

To select the file or directory, simply use the browse button to pick the path for the Document account selected above.

Variable Substitution

For both the search option and the specific file/directory option, variables can be used within the path, search text, and file names.

An example that uses the current_month variable to dynamically point to the correct file:

legal_entity/inputs/{current_month}/ledger_values.csv

Target Table

The target selection for imports is limited to tables only since views do not contain underlying data.

Dynamic Option

The Dynamic option allows specification of a table using text, including variables. This is useful when employing variable-driven workflows where table and view references are relative to the variables specified.

An example that uses the current_month variable to dynamically point to the target table:

legal_entity/inputs/{current_month}/ledger_values

Static Option

When a specific table is desired as the target for the import, leave the Dynamic box unchecked and select the target Table.

If the target Table does not exist, select the Create new table button to create the table in the desired location.

Table Explorer is always available with any table selection. Click on the Table Explorer button to the right of the table selection and a Table Explorer window will open.

Data Mapper Configuration

Table Data Mapper

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct records, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check the box next to each corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group_By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy which is the same syntax as other expressions.

View examples and expression functions in the Expressions area.

1.1.4.2.7 - Import Google Spreadsheet

Import specific worksheets from Google Spreadsheet files

Description

Import specific worksheets from Google Spreadsheet files.

Examples

No examples yet...


Import Parameters

Import Google Spreadsheet

Source And Target

Google Account

Accessing Google Spreadsheet data requires a valid Google user account, which must be set up in Tools. For details on setting up a Google account connection, see here: PlaidCloud Tools – Connection.

Once all necessary accounts have been set up, select the appropriate Google Account from the drop down list.

Spreadsheet

Next, specify the Spreadsheet to import from the dropdown menu containing all available files associated with the specified Google Account.

Target Table

The target selection for imports is limited to tables only since views do not contain underlying data.

Dynamic Option

The Dynamic option allows specification of a table using text, including variables. This is useful when employing variable-driven workflows where table and view references are relative to the variables specified.

An example that uses the current_month variable to dynamically point to the target table:

legal_entity/inputs/{current_month}/ledger_values

Static Option

When a specific table is desired as the target for the import, leave the Dynamic box unchecked and select the target Table.

If the target Table does not exist, select the Create new table button to create the table in the desired location.

Table Explorer is always available with any table selection. Click on the Table Explorer button to the right of the table selection and a Table Explorer window will open.

Header Type

Since Google Spreadsheets may or may not contain headers, PlaidCloud provides a way to either use the headers, ignore headers, or use column order to determine the column alignment.

  • No Header: The file contains no header. Use the source list in the Data Mapper to determine the column alignment
  • Has Header - Use Header and Override Field List: The file has a header. Use the header names specified and ignore the source list in the Data Mapper.
  • Has Header - Skip Header and Use Field List Instead: The file has a header but it should be ignored. Use the header names specified by the source list in the Data Mapper.

Worksheets to Import

Because workbooks may contain many worksheets with different data, it is possible to select which worksheets should be imported in the current import process. The options are:

  • All Worksheets
  • Worksheets Matching Search
  • Selected Worksheets

The search functionality for worksheets allows inclusion of worksheets matching the search criteria. The search criteria allows for:

  • Starts With: The worksheet name starts with the search text
  • Contains: The worksheet name contains the search text
  • Ends With: The worksheet name ends with the search text

Find Sheets in Selected File

The Find Sheets button will open the spreadsheet and list the available worksheets in the table. Mark the checkboxes in the table for the worksheets to be included in the import.

Table Data Selection

Data Mapper Configuration

Table Data Mapper

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right clikc and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct records, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.
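
The effect of sorting before the distinct step can be pictured with this pandas sketch (for illustration only; the column names are hypothetical):

    import pandas as pd

    df = pd.DataFrame({
        "entity": ["A", "A", "B"],
        "period": ["2024_05", "2024_06", "2024_06"],
    })

    # Sorting first makes the surviving record deterministic between runs
    deduped = df.sort_values("period").drop_duplicates(subset=["entity"], keep="first")
    print(deduped)  # keeps A/2024_05 and B/2024_06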

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group_By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage

Data Filters

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy which is the same syntax as other expressions.

View examples and expression functions in the Expressions area.
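
As an illustrative sketch, a filter that keeps rows for a single state with positive values could be written as follows (the column names are hypothetical):

    from sqlalchemy import and_, column

    filter_clause = and_(column("state") == "MO", column("value") > 0)
    print(filter_clause)  # state = :state_1 AND value > :value_1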


1.1.4.2.8 - Import HDF

Import HDF5 files from PlaidCloud Document

Description

Import HDF5 files from PlaidCloud Document.

For more details on HDF5 files, see the HDF Group’s official website here: http://www.hdfgroup.org/HDF5/.

Examples

No examples yet...


Unique Configuration Items

Key Name

HDF files store data in a path structure. A key (path) is needed as the destination for the table within the HDF file. In most situations, this will be table.
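
As a sketch of the key concept, here is how a table is written to and read from an HDF5 key with pandas (for illustration only):

    import pandas as pd

    df = pd.DataFrame({"id": [1, 2], "value": [10.0, 20.0]})
    df.to_hdf("example.h5", key="table", mode="w")   # write under the key 'table'

    round_trip = pd.read_hdf("example.h5", key="table")  # read the same key back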


Common Configuration Items

Remove non-ASCII Characters Option

By selecting this option, the import will remove any content that is not ASCII. While PlaidCloud fully supports Unicode (UTF-8), real-world files can contain all sorts of encodings and stray characters that make them challenging to process.

If the content of the file is expected to be ASCII only, checking this box will help ensure the import process runs smoothly.

Delete Files After Import Option

This option will allow the import process to delete the file from the PlaidCloud Document account after a successful import has completed.

This can be useful if the import files are generated from a system of record and can be recreated, or if there is no reason to retain the raw input files once they have been processed.

Import File Selector

The file selector in this transform allows you to choose a file stored in a PlaidCloud Document location for import.

You can also choose a directory to import and all files within that directory will be imported as part of the transform run.

Selecting a Document Account

Choose a PlaidCloud Document account for which you have access. This will provide you with the ability to select a directory or file in the next selection.

Search Option

The Search option allows for finding all matching files below a specified directory path to import. This can be particularly useful if many files need to be included but they are stored in nested directories or are mixed in with other files within the same directory which you do not want to import.

The search path selected is the starting directory to search under. The search process will look for all files within that directory as well as sub-directories that match the search conditions specified. Ensure the search criteria can be applied to the files within the sub-directories too.

The search can be applied using the following conditions:

  • Exact: Match the search text exactly
  • Starts With: Match any file that starts with the search text
  • Contains: Match any file that contains the search text
  • Ends With: Match any file that ends with the search text
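
Conceptually, the search walks the starting directory and applies the chosen condition to each file name, as in this sketch (the paths and search text are hypothetical):

    import os

    search_path = "legal_entity/inputs"  # starting directory to search under
    search_text = "ledger"

    matches = []
    for root, _dirs, files in os.walk(search_path):
        for name in files:
            if search_text in name:  # 'Contains'; use ==, startswith, or endswith for the others
                matches.append(os.path.join(root, name))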

File or Directory Selection Option

When a specific file or directory of files is required for import, picking the file or directory is a better option than using search.

To select the file or directory, simply use the browse button to pick the path for the Document account selected above.

Variable Substitution

For both the search option and the specific file/directory option, variables can be used within the path, search text, and file names.

An example that uses the current_month variable to dynamically point to the correct file:

legal_entity/inputs/{current_month}/ledger_values.csv

Target Table

The target selection for imports is limited to tables only since views do not contain underlying data.

Dynamic Option

The Dynamic option allows specification of a table using text, including variables. This is useful when employing variable driven workflows where table and view references are relative to the variables specified.

An example that uses the current_month variable to dynamically point to the target table:

legal_entity/inputs/{current_month}/ledger_values

Static Option

When a specific table is desired as the target for the import, leave the Dynamic box unchecked and select the target Table.

If the target Table does not exist, select the Create new table button to create the table in the desired location.

Table Explorer is always available with any table selection. Click on the Table Explorer button to the right of the table selection and a Table Explorer window will open.

Data Mapper Configuration

Table Data Mapper

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct records, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group_By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy which is the same syntax as other expressions.

View examples and expression functions in the Expressions area.

1.1.4.2.9 - Import HTML

Import HTML table data from the internet

Description

Import HTML table data from the internet.

Examples

No examples yet...


Unique Configuration Items

Select Tables in HTML

Since it is possible to have multiple tables on a web page, the user must specify which table to import. To do so, specify Name and/or Attribute values to match.

For example, consider the following table:

<table border="1" id="import">
  <tr> <th>Hello</th><th>World</th> </tr>
  <tr> <td>1</td><td>2</td> </tr>
  <tr> <td>3</td><td>4</td> </tr>
</table>

To import this table, specify id:import in the Name Match field.
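
The selection behaves much like the attrs argument of pandas.read_html, shown here for intuition (illustrative only; this is not necessarily how PlaidCloud performs the import internally):

    import io

    import pandas as pd

    html = '''<table border="1" id="import">
      <tr><th>Hello</th><th>World</th></tr>
      <tr><td>1</td><td>2</td></tr>
      <tr><td>3</td><td>4</td></tr>
    </table>'''

    # Select the table whose id attribute is 'import'
    df = pd.read_html(io.StringIO(html), attrs={"id": "import"})[0]
    print(df)  # two columns (Hello, World) and two data rows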

Additionally, there is an option to skip rows at the beginning of the table.

Column Headers

Specify the row to use for header information. By default, the Column Header Row is 0.


Common Configuration Items

Remove non-ASCII Characters Option

By selecting this option, the import will remove any content that is not ASCII. While PlaidCloud fully supports Unicode (UTF-8), real-world files can contain all sorts of encodings and stray characters that make them challenging to process.

If the content of the file is expected to be ASCII only, checking this box will help ensure the import process runs smoothly.

Delete Files After Import Option

This option will allow the import process to delete the file from the PlaidCloud Document account after a successful import has completed.

This can be useful if the import files are generated from a system of record and can be recreated, or if there is no reason to retain the raw input files once they have been processed.

Import File Selector

The file selector in this transform allows you to choose a file stored in a PlaidCloud Document location for import.

You can also choose a directory to import and all files within that directory will be imported as part of the transform run.

Selecting a Document Account

Choose a PlaidCloud Document account for which you have access. This will provide you with the ability to select a directory or file in the next selection.

Search Option

The Search option allows for finding all matching files below a specified directory path to import. This can be particularly useful if many files need to be included but they are stored in nested directories or are mixed in with other files within the same directory which you do not want to import.

The search path selected is the starting directory to search under. The search process will look for all files within that directory as well as sub-directories that match the search conditions specified. Ensure the search criteria can be applied to the files within the sub-directories too.

The search can be applied using the following conditions:

  • Exact: Match the search text exactly
  • Starts With: Match any file that starts with the search text
  • Contains: Match any file that contains the search text
  • Ends With: Match any file that ends with the search text

File or Directory Selection Option

When a specific file or directory of files is required for import, picking the file or directory is a better option than using search.

To select the file or directory, simply use the browse button to pick the path for the Document account selected above.

Variable Substitution

For both the search option and the specific file/directory option, variables can be used within the path, search text, and file names.

An example that uses the current_month variable to dynamically point to the correct file:

legal_entity/inputs/{current_month}/ledger_values.csv

Target Table

The target selection for imports is limited to tables only since views do not contain underlying data.

Dynamic Option

The Dynamic option allows specification of a table using text, including variables. This is useful when employing variable driven workflows where table and view references are relative to the variables specified.

An example that uses the current_month variable to dynamically point to the target table:

legal_entity/inputs/{current_month}/ledger_values

Static Option

When a specific table is desired as the target for the import, leave the Dynamic box unchecked and select the target Table.

If the target Table does not exist, select the Create new table button to create the table in the desired location.

Table Explorer is always available with any table selection. Click on the Table Explorer button to the right of the table selection and a Table Explorer window will open.

Data Mapper Configuration

Table Data Mapper

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct records, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group_By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy which is the same syntax as other expressions.

View examples and expression functions in the Expressions area.

1.1.4.2.10 - Import JSON

Import JSON text files from PlaidCloud Document

Description

Import JSON text files from PlaidCloud Document.

For more details on JSON files, see the JSON official website here: http://json.org/.

JSON files do not retain column order. The column order in the source file does not necessarily reflect the column order in the imported data table.

Examples

No examples yet...


Unique Configuration Items

JSON Data Orientation

Consider the following data set:

| ID | Name   | Gender | State |
| 1  | Jack   | M      | MO    |
| 2  | Jill   | F      | MO    |
| 3  | George | M      | VA    |
| 4  | Abe    | M      | KY    |

JSON files can be imported from one of three data formats:

  • Records: Data is stored in Python dictionary sets, with each row stored in {Column -> Value, …} format. For example:
    [
      { "ID": 1, "Name": "Jack", "Gender": "M", "State": "MO" },
      { "ID": 2, "Name": "Jill", "Gender": "F", "State": "MO" },
      { "ID": 3, "Name": "George", "Gender": "M", "State": "VA" },
      { "ID": 4, "Name": "Abe", "Gender": "M", "State": "KY" }
    ]
  • Index: Data is stored in nested Python dictionary sets, with each row stored in {Index -> {Column -> Value, …},…} format. For example:
    {
      "0": { "ID": 1, "Name": "Jack", "Gender": "M", "State": "MO" },
      "1": { "ID": 2, "Name": "Jill", "Gender": "F", "State": "MO" },
      "2": { "ID": 3, "Name": "George", "Gender": "M", "State": "VA" },
      "3": { "ID": 4, "Name": "Abe", "Gender": "M", "State": "KY" }
    }
  • Split: Data is stored in a single Python dictionary set, values stored in lists. For example:
    {
      "columns": ["ID", "Name", "Gender", "State"],
      "index": [0, 1, 2, 3],
      "data": [
        [1, "Jack", "M", "MO"],
        [2, "Jill", "F", "MO"],
        [3, "George", "M", "VA"],
        [4, "Abe", "M", "KY"]
      ]
    }
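
These three orientations correspond to the orient argument of pandas.read_json, sketched below (for illustration only):

    import io

    import pandas as pd

    records_json = '[{"ID": 1, "Name": "Jack"}, {"ID": 2, "Name": "Jill"}]'
    df = pd.read_json(io.StringIO(records_json), orient="records")

    # orient="index" and orient="split" parse the other two layouts shown above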

Common Configuration Items

Remove non-ASCII Characters Option

By selecting this option, the import will remove any content that is not ASCII. While PlaidCloud fully supports Unicode (UTF-8), real-world files can contain all sorts of encodings and stray characters that make them challenging to process.

If the content of the file is expected to be ASCII only, checking this box will help ensure the import process runs smoothly.

Delete Files After Import Option

This option will allow the import process to delete the file from the PlaidCloud Document account after a successful import has completed.

This can be useful if the import files are generated from a system of record and can be recreated, or if there is no reason to retain the raw input files once they have been processed.

Import File Selector

The file selector in this transform allows you to choose a file stored in a PlaidCloud Document location for import.

You can also choose a directory to import and all files within that directory will be imported as part of the transform run.

Selecting a Document Account

Choose a PlaidCloud Document account for which you have access. This will provide you with the ability to select a directory or file in the next selection.

Search Option

The Search option allows for finding all matching files below a specified directory path to import. This can be particularly useful if many files need to be included but they are stored in nested directories or are mixed in with other files within the same directory which you do not want to import.

The search path selected is the starting directory to search under. The search process will look for all files within that directory as well as sub-directories that match the search conditions specified. Ensure the search criteria can be applied to the files within the sub-directories too.

The search can be applied using the following conditions:

  • Exact: Match the search text exactly
  • Starts With: Match any file that starts with the search text
  • Contains: Match any file that contains the search text
  • Ends With: Match any file that ends with the search text

File or Directory Selection Option

When a specific file or directory of files is required for import, picking the file or directory is a better option than using search.

To select the file or directory, simply use the browse button to pick the path for the Document account selected above.

Variable Substitution

For both the search option and the specific file/directory option, variables can be used within the path, search text, and file names.

An example that uses the current_month variable to dynamically point to the correct file:

legal_entity/inputs/{current_month}/ledger_values.csv

Target Table

The target selection for imports is limited to tables only since views do not contain underlying data.

Dynamic Option

The Dynamic option allows specification of a table using text, including variables. This is useful when employing variable driven workflows where table and view references are relative to the variables specified.

An example that uses the current_month variable to dynamically point to the target table:

legal_entity/inputs/{current_month}/ledger_values

Static Option

When a specific table is desired as the target for the import, leave the Dynamic box unchecked and select the target Table.

If the target Table does not exist, select the Create new table button to create the table in the desired location.

Table Explorer is always available with any table selection. Click on the Table Explorer button to the right of the table selection and a Table Explorer window will open.

Data Mapper Configuration

Table Data Mapper

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct records, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group_By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy which is the same syntax as other expressions.

View examples and expression functions in the Expressions area.

1.1.4.2.11 - Import Project Table

Import table data from a different project

Description

Import table data from a different project.


Data Sharing Management

Data Sharing Management

In order to import a table from another project, you must first go to each project's Home tab and allow the projects to share data with each other. To do this, select New Data Share, select the other project, and grant it Read access.

Import External Project Table

Import Source and Target

Read From

Select the Source Project and Source Table from the drop downs.

Write To

Target Table

The target selection for imports is limited to tables only since views do not contain underlying data.

Dynamic Option

The Dynamic option allows specification of a table using text, including variables. This is useful when employing variable driven workflows where table and view references are relative to the variables specified.

An example that uses the current_month variable to dynamically point to the target table:

legal_entity/inputs/{current_month}/ledger_values

Static Option

When a specific table is desired as the target for the import, leave the Dynamic box unchecked and select the target Table.

If the target Table does not exist, select the Create new table button to create the table in the desired location.

Table Explorer is always available with any table selection. Click on the Table Explorer button to the right of the table selection and a Table Explorer window will open.

1.1.4.2.12 - Import Quandl

Imports data sets from Quandl’s repository of millions of data sets

Description

Imports data sets from Quandl’s repository of millions of data sets.

For more details on Quandl data sets, see the Quandl official website here: http://www.quandl.com/.

Examples

No examples yet...


Unique Configuration Items

Source Data Specification

Accessing Quandl data sets requires a user account or a guest account with limited access. This requires setup in Tools. For details on setting up a Quandl account connection, see here: PlaidCloud Tools – Connection.

Once all necessary accounts have been set up, select the appropriate account from the drop down list.

Next, enter criteria for the desired Quandl code. Users can use the Search functionality to search for data sets. Alternatively, data sets can be entered manually. This requires the user to enter the portion of the URL after “http://www.quandl.com”.

For example, to import the data set for Microsoft stock, which can be found here (http://www.quandl.com/GOOG/NASDAQ_MSFT), enter GOOG/NASDAQ_MSFT in the Quandl Code field.

Data Selection

It is possible to slice Quandl data sets upon import. Available options include the following:

  • Start Date: Use the date picker to select the desired date.
  • End Date: Use the date picker to select the desired date.
  • Collapse: Aggregate results on a daily, weekly, monthly, quarterly, or annual basis. There is no aggregation by default.
  • Transformation: Apply a summary calculation, such as differences or cumulative sums, to the values before import.
  • Limit Rows: The default value of 0 returns all rows. Any other positive integer value will specify the limit of rows to return from the data set.
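
For orientation, the same selections expressed through Quandl's own Python package look roughly like the sketch below (for illustration only; PlaidCloud performs the retrieval using the account configured above):

    import quandl  # third-party Quandl client, shown for illustration

    quandl.ApiConfig.api_key = "YOUR_API_KEY"  # hypothetical credential

    data = quandl.get(
        "GOOG/NASDAQ_MSFT",       # Quandl Code from the example above
        start_date="2015-01-01",  # Start Date
        end_date="2015-12-31",    # End Date
        collapse="monthly",       # Collapse
        transform="diff",         # Transformation
        rows=100,                 # Limit Rows
    )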

Common Configuration Items

Remove non-ASCII Characters Option

By selecting this option, the import will remove any content that is not ASCII. While PlaidCloud fully supports Unicode (UTF-8), real-world files can contain all sorts of encodings and stray characters that make them challenging to process.

If the content of the file is expected to be ASCII only, checking this box will help ensure the import process runs smoothly.

Delete Files After Import Option

This option will allow the import process to delete the file from the PlaidCloud Document account after a successful import has completed.

This can be useful if the import files are generated from a system of record and can be recreated, or if there is no reason to retain the raw input files once they have been processed.

Target Table

The target selection for imports is limited to tables only since views do not contain underlying data.

Dynamic Option

The Dynamic option allows specification of a table using text, including variables. This is useful when employing variable driven workflows where table and view references are relative to the variables specified.

An example that uses the current_month variable to dynamically point to the target table:

legal_entity/inputs/{current_month}/ledger_values

Static Option

When a specific table is desired as the target for the import, leave the Dynamic box unchecked and select the target Table.

If the target Table does not exist, select the Create new table button to create the table in the desired location.

Table Explorer is always available with any table selection. Click on the Table Explorer button to the right of the table selection and a Table Explorer window will open.

Data Mapper Configuration

Table Data Mapper

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct records, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group_By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy which is the same syntax as other expressions.

View examples and expression functions in the Expressions area.

1.1.4.2.13 - Import SAS7BDAT

Import SAS table files from PlaidCloud Document

Description

Import SAS table files from PlaidCloud Document.

Examples

No examples yet...


Unique Configuration Items

None


Common Configuration Items

Remove non-ASCII Characters Option

By selecting this option, the import will remove any content that is not ASCII. While PlaidCloud fully supports Unicode (UTF-8), real-world files can contain all sorts of encodings and stray characters that make them challenging to process.

If the content of the file is expected to be ASCII only, checking this box will help ensure the import process runs smoothly.

Delete Files After Import Option

This option will allow the import process to delete the file from the PlaidCloud Document account after a successful import has completed.

This can be useful if the import files are generated from a system of record and can be recreated, or if there is no reason to retain the raw input files once they have been processed.

Import File Selector

The file selector in this transform allows you to choose a file stored in a PlaidCloud Document location for import.

You can also choose a directory to import and all files within that directory will be imported as part of the transform run.

Selecting a Document Account

Choose a PlaidCloud Document account for which you have access. This will provide you with the ability to select a directory or file in the next selection.

Search Option

The Search option allows for finding all matching files below a specified directory path to import. This can be particularly useful if many files need to be included but they are stored in nested directories or are mixed in with other files within the same directory which you do not want to import.

The search path selected is the starting directory to search under. The search process will look for all files within that directory as well as sub-directories that match the search conditions specified. Ensure the search criteria can be applied to the files within the sub-directories too.

The search can be applied using the following conditions:

  • Exact: Match the search text exactly
  • Starts With: Match any file that starts with the search text
  • Contains: Match any file that contains the search text
  • Ends With: Match any file that ends with the search text

File or Directory Selection Option

When a specific file or directory of files is required for import, picking the file or directory is a better option than using search.

To select the file or directory, simply use the browse button to pick the path for the Document account selected above.

Variable Substitution

For both the search option and the specific file/directory option, variables can be used within the path, search text, and file names.

An example that uses the current_month variable to dynamically point to the correct file:

legal_entity/inputs/{current_month}/ledger_values.csv

Target Table

The target selection for imports is limited to tables only since views do not contain underlying data.

Dynamic Option

The Dynamic option allows specification of a table using text, including variables. This is useful when employing variable driven workflows where table and view references are relative to the variables specified.

An example that uses the current_month variable to dynamically point to the target table:

legal_entity/inputs/{current_month}/ledger_values

Static Option

When a specific table is desired as the target for the import, leave the Dynamic box unchecked and select the target Table.

If the target Table does not exist, select the Create new table button to create the table in the desired location.

Table Explorer is always available with any table selection. Click on the Table Explorer button to the right of the table selection and a Table Explorer window will open.

Data Mapper Configuration

Table Data Mapper

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct records, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group_By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy which is the same syntax as other expressions.

View examples and expression functions in the Expressions area.

1.1.4.2.14 - Import SPSS

Import SPSS sav and zsav files from PlaidCloud Document

Description

Import SPSS sav and zsav files from PlaidCloud Document.

Examples

No examples yet...


Unique Configuration Items

None


Common Configuration Items

Remove non-ASCII Characters Option

By selecting this option, the import will remove any content that is not ASCII. While PlaidCloud fully supports Unicode (UTF-8), real-world files can contain all sorts of encodings and stray characters that make them challenging to process.

If the content of the file is expected to be ASCII only, checking this box will help ensure the import process runs smoothly.

Delete Files After Import Option

This option will allow the import process to delete the file from the PlaidCloud Document account after a successful import has completed.

This can be useful if the import files are generated from a system of record and can be recreated, or if there is no reason to retain the raw input files once they have been processed.

Import File Selector

The file selector in this transform allows you to choose a file stored in a PlaidCloud Document location for import.

You can also choose a directory to import and all files within that directory will be imported as part of the transform run.

Selecting a Document Account

Choose a PlaidCloud Document account for which you have access. This will provide you with the ability to select a directory or file in the next selection.

Search Option

The Search option allows for finding all matching files below a specified directory path to import. This can be particularly useful if many files need to be included but they are stored in nested directories or are mixed in with other files within the same directory which you do not want to import.

The search path selected is the starting directory to search under. The search process will look for all files within that directory as well as sub-directories that match the search conditions specified. Ensure the search criteria can be applied to the files within the sub-directories too.

The search can be applied using the following conditions:

  • Exact: Match the search text exactly
  • Starts With: Match any file that starts with the search text
  • Contains: Match any file that contains the search text
  • Ends With: Match any file that ends with the search text

File or Directory Selection Option

When a specific file or directory of files is required for import, picking the file or directory is a better option than using search.

To select the file or directory, simply use the browse button to pick the path for the Document account selected above.

Variable Substitution

For both the search option and the specific file/directory option, variables can be used within the path, search text, and file names.

An example that uses the current_month variable to dynamically point to the correct file:

legal_entity/inputs/{current_month}/ledger_values.csv

Target Table

The target selection for imports is limited to tables only since views do not contain underlying data.

Dynamic Option

The Dynamic option allows specification of a table using text, including variables. This is useful when employing variable driven workflows where table and view references are relative to the variables specified.

An example that uses the current_month variable to dynamically point to the target table:

legal_entity/inputs/{current_month}/ledger_values

Static Option

When a specific table is desired as the target for the import, leave the Dynamic box unchecked and select the target Table.

If the target Table does not exist, select the Create new table button to create the table in the desired location.

Table Explorer is always available with any table selection. Click on the Table Explorer button to the right of the table selection and a Table Explorer window will open.

Data Mapper Configuration

Table Data Mapper

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct records, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group_By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy which is the same syntax as other expressions.

View examples and expression functions in the Expressions area.

1.1.4.2.15 - Import SQL

Import data from a remote SQL database.

Description

Import data from a remote SQL database.


Import Parameters

Import SQL Table


Source And Target

Database Connection

To establish a Database Connection, please refer to PlaidCloud Data Connections.

Target Table

The target selection for imports is limited to tables only since views do not contain underlying data.

Dynamic Option

The Dynamic option allows specification of a table using text, including variables. This is useful when employing variable driven workflows where table and view references are relative to the variables specified.

An example that uses the current_month variable to dynamically point to the target table:

legal_entity/inputs/{current_month}/ledger_values

Static Option

When a specific table is desired as the target for the import, leave the Dynamic box unchecked and select the target Table.

If the target Table does not exist, select the Create new table button to create the table in the desired location.

Table Explorer is always available with any table selection. Click on the Table Explorer button to the right of the table selection and a Table Explorer window will open.

SQL Query

In this section, write the SQL query that returns the required data.
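
For example, a query of the following shape could be entered (the table and column names are hypothetical):

    SELECT entity, period, SUM(amount) AS amount
    FROM ledger_values
    WHERE period = '2024-06'
    GROUP BY entity, period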

Column Type Guessing

SQL Imports have the option of attempting to guess the data type during load, or to set all columns to type Text. Setting the data types dynamically can be quicker if the data is clean, but can cause issues in some circumstances.

For example, if a column is mostly numeric but also contains some text, the import may guess a numeric type, causing load failures when the text values do not match. Leading zeros are another hazard: a 16-digit numeric product code would be cropped to a plain number, losing its leading zeros rather than remaining a 16-digit code.

Setting the data to all text, however, requires a subsequent Extract step to convert any data types that shouldn't be text to the appropriate type, like dates or numerical values.
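
The leading-zero hazard is easy to demonstrate with a small sketch (pandas is used here purely for illustration):

    import pandas as pd

    raw = pd.DataFrame({"product_code": ["0012345678901234"], "amount": ["10.5"]})

    as_number = raw["product_code"].astype("int64")  # 12345678901234, zeros lost
    kept_text = raw["product_code"]                  # '0012345678901234' preserved
    amount = pd.to_numeric(raw["amount"])            # the later cast an Extract step would perform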


1.1.4.2.16 - Import Stata

Import Stata files from PlaidCloud Document

Description

Import Stata files from PlaidCloud Document.

Examples

No examples yet...


Unique Configuration Items

None


Common Configuration Items

Remove non-ASCII Characters Option

By selecting this option, the import will remove any content that is not ASCII. While PlaidCloud fully supports Unicode (UTF-8), real-world files can contain all sorts of encodings and stray characters that make them challenging to process.

If the content of the file is expected to be ASCII only, checking this box will help ensure the import process runs smoothly.

Delete Files After Import Option

This option will allow the import process to delete the file from the PlaidCloud Document account after a successful import has completed.

This can be useful if the import files are generated from a system of record and can be recreated, or if there is no reason to retain the raw input files once they have been processed.

Import File Selector

The file selector in this transform allows you to choose a file stored in a PlaidCloud Document location for import.

You can also choose a directory to import and all files within that directory will be imported as part of the transform run.

Selecting a Document Account

Choose a PlaidCloud Document account for which you have access. This will provide you with the ability to select a directory or file in the next selection.

Search Option

The Search option allows for finding all matching files below a specified directory path to import. This can be particularly useful if many files need to be included but they are stored in nested directories or are mixed in with other files within the same directory which you do not want to import.

The search path selected is the starting directory to search under. The search process will look for all files within that directory as well as sub-directories that match the search conditions specified. Ensure the search criteria can be applied to the files within the sub-directories too.

The search can be applied using the following conditions:

  • Exact: Match the search text exactly
  • Starts With: Match any file that starts with the search text
  • Contains: Match any file that contains the search text
  • Ends With: Match any file that ends with the search text

File or Directory Selection Option

When a specific file or directory of files is required for import, picking the file or directory is a better option than using search.

To select the file or directory, simply use the browse button to pick the path for the Document account selected above.

Variable Substitution

For both the search option and the specific file/directory option, variables can be used within the path, search text, and file names.

An example that uses the current_month variable to dynamically point to the correct file:

legal_entity/inputs/{current_month}/ledger_values.csv

Target Table

The target selection for imports is limited to tables only since views do not contain underlying data.

Dynamic Option

The Dynamic option allows specification of a table using text, including variables. This is useful when employing variable driven workflows where table and view references are relative to the variables specified.

An example that uses the current_month variable to dynamically point to target table:

legal_entity/inputs/{current_month}/ledger_values

Static Option

When a specific table is desired as the target for the import, leave the Dynamic box unchecked and select the target Table.

If the target Table does not exist, select the Create new table button to create the table in the desired location.

Table Explorer is always available with any table selection. Click on the Table Explorer button to the right of the table selection and a Table Explorer window will open.

Data Mapper Configuration

Table Data Mapper

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct options, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.
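
In SQL terms, the checked columns behave like a SELECT DISTINCT; a minimal SQLAlchemy sketch (table and column names are hypothetical):

  from sqlalchemy import Column, MetaData, Table, Text, select

  products = Table(
      "products", MetaData(),
      Column("product_code", Text),
      Column("region", Text),
  )

  # Checking Distinct on both columns is equivalent to:
  stmt = select(products.c.product_code, products.c.region).distinct()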

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group_By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage
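
As an illustration, choosing Group By on one column and Sum on another corresponds to a standard SQL aggregation; a minimal SQLAlchemy sketch (table and column names are hypothetical):

  from sqlalchemy import Column, MetaData, Numeric, Table, Text, func, select

  ledger = Table(
      "ledger", MetaData(),
      Column("account", Text),
      Column("amount", Numeric),
  )

  # Group By "account", Sum "amount":
  stmt = select(
      ledger.c.account,
      func.sum(ledger.c.amount).label("amount"),
  ).group_by(ledger.c.account)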

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy, which is the same syntax as other expressions.

View examples and expression functions in the Expressions area.
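
For instance, a source filter that keeps only positive amounts for a given period could be written as follows (a sketch using generic SQLAlchemy column references; the table and column names are hypothetical):

  from sqlalchemy import Column, MetaData, Numeric, Table, Text, and_

  ledger = Table(
      "ledger", MetaData(),
      Column("period", Text),
      Column("amount", Numeric),
  )

  # Keep current-period rows with a positive amount.
  condition = and_(ledger.c.period == "2024_03", ledger.c.amount > 0)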

1.1.4.2.17 - Import XML

Import XML data from an XML file stored in PlaidCloud Document

Description

Import XML data from an XML file stored in PlaidCloud Document.

Examples

No examples yet...


Unique Configuration Items

None


Common Configuration Items

Remove non-ASCII Characters Option

By selecting this option, the import will remove any content that is not ASCII. While PlaidCloud fully supports Unicode (UTF-8), real-world files can contain all sorts of encodings and stray characters that make them challenging to process.

If the content of the file is expected to be ASCII only, checking this box will help ensure the import process runs smoothly.

Delete Files After Import Option

This option will allow the import process to delete the file from the PlaidCloud Document account after a successful import has completed.

This can be useful if the import files can be recreated from a system of record, or if there is no reason to retain the raw input files once they have been processed.

Import File Selector

The file selector in this transform allows you to choose a file stored in a PlaidCloud Document location for import.

You can also choose a directory to import and all files within that directory will be imported as part of the transform run.

Selecting a Document Account

Choose a PlaidCloud Document account for which you have access. This will provide you with the ability to select a directory or file in the next selection.

Search Option

The Search option allows for finding all matching files below a specified directory path to import. This can be particularly useful if many files need to be included but they are stored in nested directories or are mixed in with other files within the same directory which you do not want to import.

The search path selected is the starting directory to search under. The search process will look for all files within that directory as well as sub-directories that match the search conditions specified. Ensure the search criteria can be applied to the files within the sub-directories too.

The search can be applied using the following conditions:

  • Exact: Match the search text exactly
  • Starts With: Match any file that starts with the search text
  • Contains: Match any file that contains the search text
  • Ends With: Match any file that ends with the search text

File or Directory Selection Option

When a specific file or directory of files are required for import, picking the file or directory is a better option than using search.

To select the file or directory, simply use the browse button to pick the path for the Document account selected above.

Variable Substitution

For both the search option and specific file/directory option, variables can be used within the path, search text, and file names.

An example that uses the current_month variable to dynamically point to the correct file:

legal_entity/inputs/{current_month}/ledger_values.csv

Target Table

The target selection for imports is limited to tables only since views do not contain underlying data.

Dynamic Option

The Dynamic option allows specification of a table using text, including variables. This is useful when employing variable driven workflows where table and view references are relative to the variables specified.

An example that uses the current_month variable to dynamically point to target table:

legal_entity/inputs/{current_month}/ledger_values

Static Option

When a specific table is desired as the target for the import, leave the Dynamic box unchecked and select the target Table.

If the target Table does not exist, select the Create new table button to create the table in the desired location.

Table Explorer is always available with any table selection. Click on the Table Explorer button to the right of the table selection and a Table Explorer window will open.

Data Mapper Configuration

Table Data Mapper

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct options, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group_By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy, which is the same syntax as other expressions.

View examples and expression functions in the Expressions area.

1.1.4.3 - Export Steps

1.1.4.3.1 - Export to CSV

Export an Analyze data table to PlaidCloud Document as a CSV delimited file

Description

Export an Analyze data table to PlaidCloud Document as a CSV delimited file.

Export Parameters

Export File Selector

The file selector in this transform allows you to choose a destination in PlaidCloud Document to store the exported result.

You choose a directory and specify a file name for the target file.

Source Table

Dynamic Option

The Dynamic option allows specification of a table using text, including variables. This is useful when employing variable driven workflows where table and view references are relative to the variables specified.

An example that uses the current_month variable to dynamically point to source table:

legal_entity/inputs/{current_month}/ledger_values

Static Option

When a specific table is desired as the source for the export, leave the Dynamic box unchecked and select the source table.

Table Explorer is always avaible with any table selection. Click on the Table Explorer button to the right of the table selection and a Table Explorer window will open.

Selecting a Document Account

Choose a PlaidCloud Document account for which you have access. This will provide you with the ability to select a directory in the next selection.

Target Directory Path

Select the Browse icon to the right of the Target Directory Path and navigate to the location you want the file saved to.

Target File Name

Specify the name the exported file should be saved as.

Selecting File Compression

By default, exported files are uncompressed, but the following compression options are available:

  • No Compression
  • Zip
  • GZip
  • BZip2

Data Format

Export CSV Data Format

Delimiter

The Export CSV transform is used to export data tables into delimited text files saved in PlaidCloud Document. This includes, but is not limited to, the following delimiter types:

  • Excel CSV (comma separated)

  • Excel TSV (tab separated)

  • User Defined Separator -->

    • comma (,)
    • pipe (|)
    • semicolon (;)
    • tab
    • space ( )
    • other/custom (tilde, dash, etc)

To specify a custom delimiter, select User Defined Separator --> and then Other -->, and type the custom delimiter into the text box.

Special Characters

The Special Characters section allows users to specify how to handle data with quotation marks and escape characters. Choose from the following settings:

  • Special Characters (QUOTE_MINIMAL): Quote fields with special characters (anything that would confuse a parser configured with the same dialect and options). This is the default setting.
  • All (QUOTE_ALL): Quote everything, regardless of type.
  • Non-Numeric (QUOTE_NONNUMERIC): Quote all fields that are not integers or floats. When used with the reader, input fields that are not quoted are converted to floats.
  • None (QUOTE_NONE): Do not quote anything on output. Quote characters are included in output with the escape character provided by the user. Note that only a single escape character can be provided.
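
These quoting options mirror the constants in Python's csv module; a minimal sketch of a pipe-delimited export with non-numeric quoting (the file name and rows are illustrative):

  import csv

  rows = [["code", "amount"], ["0001234567890123", 12.5]]
  with open("ledger_values.csv", "w", newline="") as f:
      # delimiter corresponds to the separator setting; quoting to the
      # Special Characters setting described above.
      writer = csv.writer(f, delimiter="|", quoting=csv.QUOTE_NONNUMERIC)
      writer.writerows(rows)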

Write Header To First Row

If this checkbox is selected the table headers will be exported to the first row. If it is not there will be no headers in the exported file.

Include Data Types In Headers

If this checkbox is selected the headers of the exported file will contain the data type for the column.

Windows Line Endings

Lastly, the Use Windows Compatible Line Endings checkbox is selected by default to ensure compatibility with Windows systems. It is advisable to leave this setting on unless working in a Unix-only environment.

Table Data Selection

Data Mapper Configuration

Table Data Mapper

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct options, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group_By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage

For more aggregation details, see the Analyze overview page here.

Data Filters

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy, which is the same syntax as other expressions.

View examples and expression functions in the Expressions area.

Examples

No examples yet...

1.1.4.3.2 - Export to Excel

Export an Analyze data table to PlaidCloud Document as a Microsoft Excel file

Description

Export an Analyze data table to PlaidCloud Document as a Microsoft Excel file. PlaidCloud Analyze supports modern versions of Microsoft Excel (2007-2016) as well as legacy versions (2000/2003).

Export Parameters

Export File Selector

The file selector in this transform allows you to choose a destination in PlaidCloud Document to store the exported result.

You choose a directory and specify a file name for the target file.

Source Table

Dynamic Option

The Dynamic option allows specification of a table using text, including variables. This is useful when employing variable driven workflows where table and view references are relative to the variables specified.

An example that uses the current_month variable to dynamically point to source table:

legal_entity/inputs/{current_month}/ledger_values

Static Option

When a specific table is desired as the source for the export, leave the Dynamic box unchecked and select the source table.

Table Explorer is always available with any table selection. Click on the Table Explorer button to the right of the table selection and a Table Explorer window will open.

Selecting a Document Account

Choose a PlaidCloud Document account for which you have access. This will provide you with the ability to select a directory in the next selection.

Target Directory Path

Select the Browse icon to the right of the Target Directory Path and navigate to the location you want the file saved to.

Target File Name

Specify the name the exported file should be saved as.

Target Sheet Name

Specify the target sheet name; the default is Sheet1.

Selecting File Compression

By default, exported files are uncompressed, but the following compression options are available:

  • No Compression
  • Zip
  • GZip
  • BZip2

Write Header To First Row

If this checkbox is selected the table headers will be exported to the first row. If it is not there will be no headers in the exported file.

Table Data Selection

Data Mapper Configuration

Table Data Mapper

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct options, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group_By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage

For more aggregation details, see the Analyze overview page here.

Data Filters

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy, which is the same syntax as other expressions.

View examples and expression functions in the Expressions area.

Examples

No examples yet...

1.1.4.3.3 - Export to External Project Table

Export data from a project table to a different project's table.

Description

Export data from a project table to a different project's table.

Data Sharing Management

In order to export a table to another project, you must first go to each project's Home tab and allow the projects to share data with each other. To do this, select New Data Share, select the project, and grant it Read access.

Export External Project Table

Read From

Select the Source Table from the drop down menu.

Write To

Target Project

Select the Target Project from the drop down menu.

Target Table Static

To establish the target table, either select an existing table from the Target Table dropdown or click the green "+" sign to create a new table as the target.

Table Creation

When creating a new table you will have the option to either create it as a View or as a Table.

Views:

Views are useful in that the time required for a step to execute is significantly less than when a table is used. The downside of views is they are not as useful for data exploration in the table Details mode.

Tables:

When using a table as the target a step will take longer to execute but data exploration in the Details mode is much quicker than with a view.

Target Table Dynamic

The Dynamic option allows specification of a table using text, including variables. This is useful when employing variable driven workflows where table and view references are relative to the variables specified.

An example that uses the current_month variable to dynamically point to target table:

legal_entity/inputs/{current_month}/ledger_values

Append to Existing Data

To append the data from the source table to the target table select the Append to Existing Data check box.

1.1.4.3.4 - Export to Google Spreadsheet

Export an Analyze data table to Google Drive as a Google Spreadsheet

Description

Export an Analyze data table to Google Drive as a Google Spreadsheet. A valid Google account is required to use this transform. User credentials must be set up in PlaidCloud Tools prior to using the transform.

Export Parameters

Source and Target

Select the Source Table from the dropdown menu.

Next, specify the Target Connection information. For details on setting up a Google Docs account connection, see here: PlaidCloud Tools – Connection. Once all necessary accounts have been set up, select the appropriate account from the dropdown list.

Finally, provide the Target Spreadsheet Name and Target Worksheet Name. If desired, select the Append data to existing Worksheet data checkbox to append data to an existing Worksheet. If the target worksheet does not yet exist, it will be created.

Table Data Selection

Data Mapper Configuration

Table Data Mapper

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct options, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group_By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage

For more aggregation details, see the Analyze overview page here.

Data Filters

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy, which is the same syntax as other expressions.

View examples and expression functions in the Expressions area.

Examples

No examples yet...

1.1.4.3.5 - Export to HDF

Export an Analyze data table to PlaidCloud Document as an HDF5 file

Description

Export an Analyze data table to PlaidCloud Document as an HDF5 file.

For more details on HDF5 files, see the HDF Group’s official website here: http://www.hdfgroup.org/HDF5/.
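
An exported file can be read back with pandas, for example (a sketch; this assumes the PyTables package is installed and that the file contains a single table):

  import pandas as pd

  # Read the exported HDF5 file back into a DataFrame.
  df = pd.read_hdf("ledger_values.h5")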

Export Parameters

Export File Selector

The file selector in this transform allows you to choose a destination in PlaidCloud Document to store the exported result.

You choose a directory and specify a file name for the target file.

Source Table

Dynamic Option

The Dynamic option allows specification of a table using text, including variables. This is useful when employing variable driven workflows where table and view references are relative to the variables specified.

An example that uses the current_month variable to dynamically point to source table:

legal_entity/inputs/{current_month}/ledger_values

Static Option

When a specific table is desired as the source for the export, leave the Dynamic box unchecked and select the source table.

Table Explorer is always available with any table selection. Click on the Table Explorer button to the right of the table selection and a Table Explorer window will open.

Selecting a Document Account

Choose a PlaidCloud Document account for which you have access. This will provide you with the ability to select a directory in the next selection.

Target Directory Path

Select the Browse icon to the right of the Target Directory Path and navigate to the location you want the file saved to.

Target File Name

Specify the name the exported file should be saved as.

Output File Type

By default, exported files are uncompressed, but the following compression options are available:

  • Zip
  • GZip
  • BZip2

Table Data Selection

Data Mapper Configuration

Table Data Mapper

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct options, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group_By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage

For more aggregation details, see the Analyze overview page here.

Data Filters

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy, which is the same syntax as other expressions.

View examples and expression functions in the Expressions area.

Examples

No examples yet...

1.1.4.3.6 - Export to HTML

Export an Analyze data table to PlaidCloud Document as an HTML file

Description

Export an Analyze data table to PlaidCloud Document as an HTML file. The resultant HTML file will simply contain a table.

Export Parameters

Export File Selector

The file selector in this transform allows you to choose a destination in PlaidCloud Document to store the exported result.

You choose a directory and specify a file name for the target file.

Source Table

Dynamic Option

The Dynamic option allows specification of a table using text, including variables. This is useful when employing variable driven workflows where table and view references are relative to the variables specified.

An example that uses the current_month variable to dynamically point to source table:

legal_entity/inputs/{current_month}/ledger_values

Static Option

When a specific table is desired as the source for the export, leave the Dynamic box unchecked and select the source table.

Table Explorer is always available with any table selection. Click on the Table Explorer button to the right of the table selection and a Table Explorer window will open.

Selecting a Document Account

Choose a PlaidCloud Document account for which you have access. This will provide you with the ability to select a directory in the next selection.

Target Directory Path

Select the Browse icon to the right of the Target Directory Path and navigate to the location you want the file saved to.

Target File Name

Specify the name the exported file should be saved as.

Export HTML

Bold Rows

Select this checkbox to make the first row (header row) bold font.

Escape

This option is enabled by default. When the checkbox is selected, the export process will convert the characters <, >, and & to HTML-safe sequences.
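
The conversion is comparable to Python's html.escape, for example:

  import html

  html.escape("Profit & Loss < 2024>")
  # -> 'Profit &amp; Loss &lt; 2024&gt;'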

Double Precision

Sets the number of decimal places used when encoding floating point values in the output.

Output File Type

By default, exported files are uncompressed, but the following compression options are available:

  • Zip
  • GZip
  • BZip2

Table Data Selection

Data Mapper Configuration

Table Data Mapper

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct options, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group_By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage

For more aggregation details, see the Analyze overview page here.

Data Filters

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy, which is the same syntax as other expressions.

View examples and expression functions in the Expressions area.

Examples

No examples yet...

1.1.4.3.7 - Export to JSON

Export an Analyze data table to PlaidCloud Document as a JSON file

Description

Export an Analyze data table to PlaidCloud Document as a JSON file. There are several options (shown below) for data orientation.

For more details on JSON files, see the JSON official website here: http://json.org/.

Export Parameters

Export File Selector

The file selector in this transform allows you to choose a destination in PlaidCloud Document to store the exported result.

You choose a directory and specify a file name for the target file.

Source Table

Dynamic Option

The Dynamic option allows specification of a table using text, including variables. This is useful when employing variable driven workflows where table and view references are relative to the variables specified.

An example that uses the current_month variable to dynamically point to source table:

legal_entity/inputs/{current_month}/ledger_values

Static Option

When a specific table is desired as the source for the export, leave the Dynamic box unchecked and select the source table.

Table Explorer is always available with any table selection. Click on the Table Explorer button to the right of the table selection and a Table Explorer window will open.

Selecting a Document Account

Choose a PlaidCloud Document account for which you have access. This will provide you with the ability to select a directory in the next selection.

Target Directory Path

Select the Browse icon to the right of the Target Directory Path and navigate to the location you want the file saved to.

Target File Name

Specify the name the exported file should be saved as.

JSON Orientation

Consider the following data set:

ID | Name   | Gender | State
---|--------|--------|------
1  | Jack   | M      | MO
2  | Jill   | F      | MO
3  | George | M      | VA
4  | Abe    | M      | KY

JSON files can be exported into one of four data formats:

  • Records: Data is stored in Python dictionary sets, with each row stored in {Column -> Value, ...} format. For example: [{"ID":1,"Name":"Jack","Gender":"M","State":"MO"},{"ID":2,"Name":"Jill","Gender":"F","State":"MO"},{"ID":3,"Name":"George","Gender":"M","State":"VA"},{"ID":4,"Name":"Abe","Gender":"M","State":"KY"}]
  • Index: Data is stored in nested Python dictionary sets, with each row stored in {Index -> {Column -> Value, ...},...} format. For example: {"0":{"ID":1,"Name":"Jack","Gender":"M","State":"MO"},"1":{"ID":2,"Name":"Jill","Gender":"F","State":"MO"},"2":{"ID":3,"Name":"George","Gender":"M","State":"VA"},"3":{"ID":4,"Name":"Abe","Gender":"M","State":"KY"}}
  • Split: Data is stored in a single Python dictionary set, values are stored in lists. For example: {"columns":["ID","Name","Gender","State"],"index":[0,1,2,3],"data":[[1,"Jack","M","MO"],[2,"Jill","F","MO"],[3,"George","M","VA"],[4,"Abe","M","KY"]]}
  • Values: Data is stored in multiple Python lists. For example: [[1,"Jack","M","MO"],[2,"Jill","F","MO"],[3,"George","M","VA"],[4,"Abe","M","KY"]]

Date Handling

Specify Date Format using the dropdown menu. Choose from the following formats:

  • Epoch (Unix Timestamp – Seconds since 1/1/1970)
  • ISO 8601 Format (YYYY-MM-DD HH:MM:SS with timezone offset)

Specify Date Unit using the dropdown menu. Choose from the following formats, listed in order of increasing precision:

  • Seconds (s)
  • Milliseconds (ms)
  • Microseconds (us)
  • Nanoseconds (ns)

Force ASCII

Select this checkbox to ensure that all strings are encoded in proper ASCII format. This is enabled by default.
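
Taken together, these JSON settings correspond closely to the parameters of pandas' DataFrame.to_json; a sketch under that assumption (the data and file name are illustrative):

  import pandas as pd

  df = pd.DataFrame({"ID": [1, 2], "Name": ["Jack", "Jill"]})
  df.to_json(
      "export.json",
      orient="records",      # JSON Orientation
      date_format="iso",     # Date Format
      date_unit="ms",        # Date Unit (milliseconds)
      force_ascii=True,      # Force ASCII
  )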

Output File Type

By default, exported files are uncompressed, but the following compression options are available:

  • Zip
  • GZip
  • BZip2

Table Data Selection

Data Mapper Configuration

Table Data Mapper

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct options, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group_By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage

For more aggregation details, see the Analyze overview page here.

Data Filters

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy, which is the same syntax as other expressions.

View examples and expression functions in the Expressions area.

Examples

No examples yet...

1.1.4.3.8 - Export to Quandl

Export an Analyze data table to Quandl’s database

Description

Export an Analyze data table to Quandl’s database.

Source and Target

Specify the following parameters:

  • Source Table: Analyze data table to export
  • Quandl Connection: Accessing Quandl data sets requires a user account or a guest account with limited access. This requires setup in PlaidCloud Tools. For details on setting up a Quandl account connection, see here: PlaidCloud Tools – Connection
  • Quandl Code: Use the Search button to search for data sets. Alternatively, data sets can be entered manually. This requires the user to enter the portion of the URL after “http://www.quandl.com”. For example, to import the data set for Microsoft stock, which can be found here (http://www.quandl.com/GOOG/NASDAQ_MSFT), enter GOOG/NASDAQ_MSFT in the Quandl Code field
  • Dataset Name: Name of the dataset to be exported to Quandl
  • Dataset Description: Description of dataset to be exported to Quandl

Table Data Selection

Data Mapper Configuration

Table Data Mapper

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct options, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group_By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage

For more aggregation details, see the Analyze overview page here.

Data Filters

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy which is the same syntax as other expressions.

View examples and expression functions in the Expressions area.
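As a rough illustration only, a filter condition might look like the sketch below. It assumes the source table is exposed as a SQLAlchemy table object named table with hypothetical columns month, value, and version; the actual object and column names depend on your project.

```python
# Hypothetical filter sketches; `table`, `month`, `value`, and `version`
# stand in for your own table object and column names.
from sqlalchemy import and_, or_

# Keep only January rows with a positive value
and_(table.c.month == 'January', table.c.value > 0)

# Keep rows from either of two versions
or_(table.c.version == 'Actual', table.c.version == 'Budget')
```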

Examples

No examples yet...

1.1.4.3.9 - Export to SQL

Export an Analyze data table to PlaidCloud Document as a SQL file

Description

Export an Analyze data table to PlaidCloud Document as a SQL file.

Examples

No examples yet...

1.1.4.3.10 - Export to Table Archive

Exports a PlaidCloud table archive file

Description

Exports a PlaidCloud table archive file.

Export Parameters

Export File Selector

Export File Selector

The file selector in this transform allows you to choose a destination in PlaidCloud Document to store the exported result.

You choose a directory and specify a file name for the target file.

Source Table

Dynamic Option

The Dynamic option allows specification of a table using text, including variables. This is useful when employing variable driven workflows where table and view references are relative to the variables specified.

An example that uses the current_month variable to dynamically point to the source table:

legal_entity/inputs/{current_month}/ledger_values

For instance, if current_month were set to 2024_01, this reference would resolve to legal_entity/inputs/2024_01/ledger_values.

Static Option

When a specific table is desired as the source for the export, leave the Dynamic box unchecked and select the source table.

Table Explorer is always available with any table selection. Click on the Table Explorer button to the right of the table selection and a Table Explorer window will open.

Selecting a Document Account

Choose a PlaidCloud Document account to which you have access. This will allow you to select a directory in the next step.

Target Directory Path

Select the Browse icon to the right of the Target Directory Path and navigate to the location you want the file saved to.

Target File Name

Specify the name the exported file should be saved as.

Examples

No examples yet...

1.1.4.3.11 - Export to XML

Export an Analyze data table to PlaidCloud Document as an XML file.

Description

Export an Analyze data table to PlaidCloud Document as an XML file.

1.1.4.4 - Table Steps

1.1.4.4.1 - Table Anti Join

This function provides an unmatched set of data between two tables

Description

Table Anti Join provides the unmatched set of items between two tables. This will return the list of items in the first table without matches in the second table. This can be quite useful for determining which records are present in one table but not another.

This operation could be accomplished by using outer joins and filtering on null values for the join; however, the Anti Join transform will perform this in a more efficient and obvious way.
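For intuition, here is a minimal SQLAlchemy sketch of that outer-join-and-null-filter approach; table_a, table_b, and the key column are hypothetical stand-ins, and the Anti Join transform performs the equivalent for you more efficiently.

```python
# Hypothetical sketch: rows of table_a with no matching key in table_b.
from sqlalchemy import select

stmt = (
    select(table_a)
    .outerjoin(table_b, table_a.c.key == table_b.c.key)
    .where(table_b.c.key.is_(None))  # keep only the unmatched rows
)
```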

Table Data Selection

Table Source

Specify the source data table by selecting it from the dropdown menu.

Source Columns

Specify any columns to be included here. Selecting the Inspect Source and Populate Source Mapping Table buttons will make these columns available for the join operation.

Select Subset of Source Data

Any valid Python expression is acceptable to subset the data. Please see Expressions for more details and examples.

Table Source

Table Output

Target Table

Table Target

To establish the target table select either an existing table as the target table using the Target Table dropdown or click on the green "+" sign to create a new table as the target.

Table Creation

When creating a new table you will have the option to either create it as a View or as a Table.

Views:

Views are useful in that the time required for a step to execute is significantly less than when a table is used. The downside of views is that they are not as useful for data exploration in the table Details mode.

Tables:

When using a table as the target, a step will take longer to execute, but data exploration in the Details mode is much quicker than with a view.

Join Map

Table Join Map

Specify join conditions. Using the Guess button will find all matching columns from both Table 1 as well as Table 2. To add additional columns manually, right click anywhere in the section and select either Insert Row or Append Row, to add a row prior to the currently selected row or to add a row at the end, respectively. Then, type the column names to match from Table 1 to Table 2. To remove a field from the Join Map, simply right-click and select Delete.

Target Output Columns

Data Mapper Configuration

Table Data Mapper

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct records, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group_By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage

Output Filters

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy which is the same syntax as other expressions.

View examples and expression functions in the Expressions area.

1.1.4.4.2 - Table Append

Used to append data to an existing table.

Description

Used to append data to an existing table.

Load Parameters

Source and Target

Source And Target

To establish the source and target tables, first select the data table to be extracted from using the Source Table dropdown menu. Next, select an existing table as the target table using the Target Table dropdown.

Table Data Selection

When configuring the Data Mapper, each column in the source table must be mapped to a column in the target table.

Data Mapper Configuration

Table Data Mapper

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct records, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group_By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage

Data Filters

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy which is the same syntax as other expressions.

View examples and expression functions in the Expressions area.

Examples

1.1.4.4.3 - Table Clear

Clear the contents of an existing data table without deleting the actual data table

Description

Clear the contents of an existing data table without deleting the actual data table. The end result is a data table with 0 rows.

Table Selection

There are two options for selecting the table (or, with the second option, multiple tables):

The first option is to use the Specific Table dropdown to select the table.

The second is to use the Tables Matching Search option, in which you specify the Search Path and Search Text to select the table or tables that match the search criteria. This option is very useful if you have a workflow that creates a series of commonly named tables that have been saved with the date appended.

Table Dynamic Selection

1.1.4.4.4 - Table Copy

Create a copy of a data table

Description

Create a copy of a data table.

Source and Target

Source And Target

To establish the source and target tables, first select the data table to be extracted from using the Source Table dropdown menu. Next, select either an existing table as the target table using the Target Table dropdown or click on the green "+" sign to create a new table as the target.

Table Creation

When creating a new table you will have the option to either create it as a View or as a Table.

Views:

Views are useful in that the time required for a step to execute is significantly less than when a table is used. The downside of views is that they are not as useful for data exploration in the table Details mode.

Tables:

When using a table as the target, a step will take longer to execute, but data exploration in the Details mode is much quicker than with a view.

When performing the copy, Analyze will first check to see if the target data table already exists. If it does, no action will be performed unless the Allow Overwriting Existing Table checkbox is selected. If this is the case, the target table will be overwritten.

Examples

1.1.4.4.5 - Table Cross Join

Use this function to perform a cross join between two data tables

Description

Use this transform, as you might have expected, to perform a cross join operation on two data tables, combining them into a single data table without join key(s).

For more details on cross join methodology, see here: Wikipedia SQL Cross Join
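As a rough SQLAlchemy sketch (table_a and table_b are hypothetical), a cross join pairs every row of one table with every row of the other:

```python
# Hypothetical sketch of a cross join (Cartesian product) in SQLAlchemy.
from sqlalchemy import select, true

stmt = select(table_a, table_b).select_from(
    table_a.join(table_b, true())  # JOIN ... ON TRUE behaves as a cross join
)
```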

Table Data Selection

Table Source

Specify the source data table by selecting it from the dropdown menu.

Source Columns

Specify any columns to be included here. Selecting the Inspect Source and Populate Source Mapping Table buttons will make these columns available for the join operation.

Select Subset of Source Data

Any valid Python expression is acceptable to subset the data. Please see Expressions for more details and examples.

Table Source

Table Output

Target Table

Table Target

To establish the target table select either an existing table as the target table using the Target Table dropdown or click on the green "+" sign to create a new table as the target.

Table Creation

When creating a new table you will have the option to either create it as a View or as a Table.

Views:

Views are useful in that the time required for a step to execute is significantly less than when a table is used. The downside of views is that they are not as useful for data exploration in the table Details mode.

Tables:

When using a table as the target, a step will take longer to execute, but data exploration in the Details mode is much quicker than with a view.

Target Output Columns

Data Mapper Configuration

Table Data Mapper

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct records, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group_By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage

Output Filters

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy which is the same syntax as other expressions.

View examples and expression functions in the Expressions area.

1.1.4.4.6 - Table Drop

Drop/Delete a data table

Description

Drop/delete a data table.

Table Selection

Table Selection

There are two options for selecting the table (or, with the second option, multiple tables):

The first option is to use the Specific Table dropdown to select the table.

The second is to use the Tables Matching Search option, in which you specify the Search Path and Search Text to select the table or tables that match the search criteria. This option is very useful if you have a workflow that creates a series of commonly named tables that have been saved with the date appended.

Table Dynamic Selection

1.1.4.4.7 - Table Extract

This function helps to extract data from one table and place it in another

Description

Used to extract data from an existing Analyze data table into another data table. Examples include, but are not limited to, the following:

  • Sort
  • Group
  • Summarization
  • Filter/Subset Rows
  • Drop Extra Columns
  • Math Operations
  • String Operations

Extract Parameters

Source and Target

Source And Target

To establish the source and target tables, first select the data table to be extracted from using the Source Table dropdown menu. Next, select either an existing table as the target table using the Target Table dropdown or click on the green "+" sign to create a new table as the target.

Table Creation

When creating a new table you will have the option to either create it as a View or as a Table.

Views:

Views are useful in that the time required for a step to execute is significantly less than when a table is used. The downside of views is that they are not as useful for data exploration in the table Details mode.

Tables:

When using a table as the target, a step will take longer to execute, but data exploration in the Details mode is much quicker than with a view.

Table Data Selection

Data Mapper Configuration

Table Data Mapper

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct records, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group_By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage

Data Filters

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy which is the same syntax as other expressions.

View examples and expression functions in the Expressions area.

Examples

1.1.4.4.8 - Table Faker

This function generates fake data

Description

Table Faker generates fake data.

Address

| Generator | Optional Arguments |
| Building Number | |
| City | |
| City Suffix | |
| Country | |
| Country Code | “representation”=”alpha-2” |
| Full Address | |
| Latitude | |
| Longitude | |
| Military DPO | |
| Postal Code | |
| Postal Code Plus 4 | |
| State | |
| State Abbreviation | |
| Street Address | |
| Street Name | |
| Street Suffix | |

Automotive

| Generator | Optional Arguments |
| License Plate | |

Barcode

| Generator | Optional Arguments |
| EAN13 | |
| EAN8 | |

Colors

| Generator | Optional Arguments |
| Color Name | |
| Hex Color | |
| RGB Color | |
| RGB CSS Color | |
| Safe Color Name | |
| Safe Hex Color | |

Company

| Generator | Optional Arguments |
| Company Catch Phrase | |
| Company Name | |
| Company Suffix | |

Credit Card

| Generator | Optional Arguments |
| Expiration Date | “start”=”now”, ”end”=”+10y” (e.g. ‘12/20’) |
| Full | “card_type”=null |
| Number | “card_type”=null |
| Provider | “card_type”=null |
| Security Code | “card_type”=null |

Currency

| Generator | Optional Arguments |
| Code | |

Date Time

| Generator | Optional Arguments |
| AM/PM | |
| Century | |
| Date | “pattern”:”%Y-%m-%d”, “end_datetime”:null |
| Date Time | “tzinfo”:null, “end_datetime”=null |
| Date Time this Century | “before_now”=true, “after_now”=false, “tzinfo”=null |
| Date Time this Decade | “before_now”=true, “after_now”=false, “tzinfo”=null |
| Date Time this Month | “before_now”=true, “after_now”=false, “tzinfo”=null |
| Date Time this Year | “before_now”=true, “after_now”=false, “tzinfo”=null |
| Day of Month | |
| Day of Week | |
| ISO8601 Date Time | “tzinfo”=null, “end_datetime”=null |
| Month | |
| Month Name | |
| Past Date (Last 30 Days) | “start_date”=”-30d”, “tzinfo”=null |
| Timezone | |
| Unix Time | “end_datetime”=null, “start_datetime”=null |
| Year | |

File

| Generator | Optional Arguments |
| File Extension | “category”=null |
| File Name | “category”=null, “extension”=null |
| File Path | “depth”=”1”, “category”=null, “extension”=null |
| Mime Type | “category”=null |

Internet

| Generator | Optional Arguments |
| Company Email | |
| Domain Name | |
| Domain Word | |
| Email | |
| Free Email | |
| Free Email Domain | |
| Image URL | “width”=null, “height”=null |
| IPv4 | “network”=false, “address_class”=”no”, “private”=null |
| IPv6 | “network”=false |
| MAC Address | |
| Safe Email | |
| Slug | |
| TLD | |
| URI | |
| URL | “schemes”=null |
| URL Extension | |
| URL Page | |
| User Name | |

ISBN

| Generator | Optional Arguments |
| ISBN10 | “separator”=”-“ |
| ISBN13 | “separator”=”-“ |

Job

| Generator | Optional Arguments |
| Job Name | |

Lorem

| Generator | Optional Arguments |
| Paragraph | “nb_sentences”=”3”, “variable_nb_sentences”=true, “ext_word_list”=null |
| Paragraphs | “nb”=”3”, “ext_word_list”=null |
| Sentence | “nb_words”=”6”, “variable_nb_words”=true, “ext_word_list”=null |
| Sentences | “nb”=”3”, “ext_word_list”=null |
| Text | “max_nb_chars”=”200”, “ext_word_list”=null |
| Word | “ext_word_list”=null |
| Words | “nb”=”3”, “ext_word_list”=null |

Misc

| Generator | Optional Arguments |
| Binary | “length”=”1048576” |
| Boolean | “chance_of_getting_true”=”50” |
| Null Boolean | |
| Locale | |
| Language Code | |
| MD5 | “raw_output”=false |
| Password | “length”=”10”, “special_chars”=true, “digits”=true, “upper_case”=true, “lower_case”=true |
| Random String | |
| SHA1 | “raw_output”=false |
| SHA256 | “raw_output”=false |
| UUID4 | |

Numeric

| Generator | Optional Arguments |
| Big Serial (Auto Increment) | |
| Random Float | |
| Random Float in Range | |
| Random Integer | |
| Random Integer in Range | |
| Random Numeric | |
| Random Percentage (0 – 1) | |
| Random Percentage (0 – 100) | |
| Serial (Auto Increment) | |

Person

| Generator | Optional Arguments |
| First Name | |
| First Name Female | |
| First Name Male | |
| Full Name | |
| Full Name Female | |
| Full Name Male | |
| Last Name | |
| Last Name Female | |
| Last Name Male | |
| Prefix | |
| Prefix Female | |
| Prefix Male | |
| Suffix | |
| Suffix Female | |
| Suffix Male | |

Phone

| Generator | Optional Arguments |
| Phone Number | |
| ISDN | |

Tax

| Generator | Optional Arguments |
| EIN | |
| Full SSN | |
| ITIN | |

User Agent

| Generator | Optional Arguments |
| Chrome | “version_from”=”13”, “version_to”=”63”, “build_from”=”800”, “build_to”=”899” |
| Firefox | |
| Full User Agent | |
| Internet Explorer | |
| Linux Platform Token | |
| Linux Processor | |
| Mac Platform Token | |
| Mac Processor | |
| Opera | |
| Safari | |
| Windows Platform Token | |

Special Generators

While these two generators do not have arguments, the options they provide act similarly to arguments.

Pattern Generator:

| Number | Format | Output | Description |
| 3.1415926 | {:.2f} | 3.14 | 2 decimal places |
| 3.1415926 | {:+.2f} | +3.14 | 2 decimal places with sign |
| -1 | {:+.2f} | -1.00 | 2 decimal places with sign |
| 2.71828 | {:.0f} | 3 | No decimal places |
| 5 | {:0>2d} | 05 | Pad number with zeros (left padding, width 2) |
| 5 | {:x<4d} | 5xxx | Pad number with x’s (right padding, width 4) |
| 10 | {:x<4d} | 10xx | Pad number with x’s (right padding, width 4) |
| 1000000 | {:,} | 1,000,000 | Number format with comma separator |
| 0.25 | {:.2%} | 25.00% | Format percentage |
| 1000000000 | {:.2e} | 1.00e+09 | Exponent notation |
| 13 | {:10d} | 13 | Right aligned (default, width 10) |
| 13 | {:<10d} | 13 | Left aligned (width 10) |
| 13 | {:^10d} | 13 | Center aligned (width 10) |
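These formats follow Python's standard format specification mini-language, so any pattern can be verified directly in Python:

```python
# The pattern formats above are standard Python format specs.
print("{:.2f}".format(3.1415926))   # 3.14
print("{:0>2d}".format(5))          # 05
print("{:,}".format(1000000))       # 1,000,000
print("{:.2%}".format(0.25))        # 25.00%
print("{:.2e}".format(1000000000))  # 1.00e+09
```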

Random Choice:

In order to provide the options for random choice, simply put your options in quotes and separate each option with a comma. So a string of random choice options would appear like this: “x”,”y”,”z”

Here, the “Key Word Args/Pattern/Choices” column of the “pattern” row contains a sentence with several references. The first reference ( {percentage0-100:.2f}% ) points to the “percentage0-100” row, which generates a random percentage. Therefore, the random percentage produced by the “percentage0-100” row will be automatically inserted into the sentence. The reference {first_name} points to the row titled “first_name”, which randomly generates a first name, and this name will be automatically inserted into the sentence. The last reference ( {random_choice} ) operates the same as the other two.

With this, when the pattern generator is run, you will receive the following results.

1.1.4.4.9 - Table In-Place Delete

Performs a delete on the table using the specified filter conditions

Description

Performs a delete on the table using the specified filter conditions. The operation is performed on the designated table directly so no additional tables are created. Only the rows that meet the filter criteria are deleted. This may be an effective approach when encountering concerns related to data size.
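Conceptually, the operation is equivalent to a SQL DELETE with a WHERE clause. The sketch below illustrates this in SQLAlchemy terms; the ledger table object, its Version column, and the "Obsolete" value are hypothetical stand-ins.

```python
# Hypothetical sketch: delete only the rows meeting the filter condition,
# leaving the rest of the table untouched.
from sqlalchemy import delete

stmt = delete(ledger).where(ledger.c.Version == "Obsolete")
```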

Delete Parameters

Select the Source table for deleting from the dropdown list. This list includes all Project and Workflow data tables.

Table In-Place Delete

Data Filters for Delete

Table In-Place Delete

Examples

1.1.4.4.10 - Table In-Place Update

Performs an update on the table using the specified filter conditions and value settings

Description

Performs an update on the table using the specified filter conditions and value settings. The operation is performed directly on the designated table, so no additional tables are created. This may be an effective approach when concerns of data size are encountered.

Table Selection

Select the Source table for updating from the dropdown list. This list includes all Project and Workflow data tables.

Examples

In this example, the Account will be set to 41000 when the Version is equal to "Actual" in "Ledger Value to be allocated".
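In SQLAlchemy terms, that example corresponds roughly to the following sketch (the ledger table object is a hypothetical stand-in):

```python
# Hypothetical sketch mirroring the example: set Account to 41000
# wherever Version equals "Actual".
from sqlalchemy import update

stmt = (
    update(ledger)
    .where(ledger.c.Version == "Actual")
    .values(Account="41000")
)
```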

Table In-Place Update

Table In-Place Update

1.1.4.4.11 - Table Inner Join

Use this function to perform an inner join between two data tables

Description

Use this transform, as you might have expected, to perform an inner join operation on two data tables, combining them into a single data table based upon the specified join key(s).

For more details on inner join methodology, see here: Wikipedia SQL Inner Join

Table Data Selection

Table Source

Specify the source data table by selecting it from the dropdown menu.

Source Columns

Specify any columns to be included here. Selecting the Inspect Source and Populate Source Mapping Table buttons will make these columns available for the join operation.

Select Subset of Source Data

Any valid Python expression is acceptable to subset the data. Please see Expressions for more details and examples.

Table Source

Table Output

Target Table

Table Target

To establish the target table select either an existing table as the target table using the Target Table dropdown or click on the green "+" sign to create a new table as the target.

Table Creation

When creating a new table you will have the option to either create it as a View or as a Table.

Views:

Views are useful in that the time required for a step to execute is significantly less than when a table is used. The downside of views is that they are not as useful for data exploration in the table Details mode.

Tables:

When using a table as the target, a step will take longer to execute, but data exploration in the Details mode is much quicker than with a view.

Join Map

Table Join Map

Specify join conditions. Using the Guess button will find all matching columns from both Table 1 as well as Table 2. To add additional columns manually, right click anywhere in the section and select either Insert Row or Append Row, to add a row prior to the currently selected row or to add a row at the end, respectively. Then, type the column names to match from Table 1 to Table 2. To remove a field from the Join Map, simply right-click and select Delete.

Target Output Columns

Data Mapper Configuration

Table Data Mapper

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct records, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group_By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage

Output Filters

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy which is the same syntax as other expressions.

View examples and expression functions in the Expressions area.

Examples

Join Automobile Manufacturers with Models

In this example, consider the following source data tables. First is a list of automobile manufacturers.

Mfg_ID | Manufacturer
1 | Aston Martin
2 | Porsche
3 | Lamborghini
4 | Ferrari
5 | Koenigsegg

Next is a list of automobile models with a manufacturer ID. Note that there are several models with no manufacturer.

ModelName | Mfg_ID
Aventador | 3
Countach | 3
DBS | 1
Enzo | 4
One-77 | 1
Optimus Prime |
Batmobile |
Agera | 5
Lightning McQueen |

To get a list of models by manufacturer, it makes sense to join on Mfg_ID.

First, specify parameters for Table 1 Data Selection. The source data table is selected and all columns are listed.

Next, specify parameters for Table 2 Data Selection. Once again, the source data table is selected and all columns are listed.

Finally, the join conditions are set in the Table Output tab. Using the Guess button, Analyze properly identifies the Mfg_ID column to use as the Join Key. Lastly, the Target Output Columns are specified automatically using the Propagate button. This effectively includes all columns from all tables, with all join columns included only a single time. Note that the columns are sorted alphabetically, first by Manufacturer and next by ModelName.

As expected, the final output only includes values which had a match in both tables. As such, Porsche does not show up because it had no models. Likewise, the Batmobile had no manufacturer (it was a custom job), so it’s not included.
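For readers who think in SQLAlchemy, the example corresponds roughly to this sketch (manufacturers and models are hypothetical table objects):

```python
# Hypothetical sketch of the inner join from the example above:
# only rows with a matching Mfg_ID in both tables are returned.
from sqlalchemy import select

stmt = select(manufacturers.c.Manufacturer, models.c.ModelName).select_from(
    manufacturers.join(models, manufacturers.c.Mfg_ID == models.c.Mfg_ID)
)
```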

1.1.4.4.12 - Table Lookup

Similar to Microsoft Excel, this workflow function also increases process performance

Description

If you are a regular user of the vlookup function in Microsoft Excel, the Table Lookup transform should feel very familiar. It’s used to perform essentially the same function. Unlike the Microsoft Excel version, the PlaidCloud Analyze Table Lookup transform offers greater flexibility, especially allowing for matching on and returning multiple columns.
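Under typical SQL semantics, a lookup behaves like a left join that appends selected columns from the lookup table. The following is a minimal hypothetical sketch using stand-in orders and products tables, mirroring the worked example later in this section:

```python
# Hypothetical sketch: append lookup columns via a left join, keeping all
# order rows and renaming Unit_Cost to Retail_Unit_Cost.
from sqlalchemy import select

stmt = select(
    orders,
    products.c.Product_Description,
    products.c.Unit_Cost.label("Retail_Unit_Cost"),
).select_from(
    orders.outerjoin(products, orders.c.Product_ID == products.c.Product_ID)
)
```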

Table Data Selection

Table Source

Specify the source data table by selecting it from the dropdown menu.

Source Columns

Specify any columns to be included here. Selecting the Inspect Source and Populate Source Mapping Table buttons will make these columns available for the join operation.

Select Subset of Source Data

Any valid Python expression is acceptable to subset the data. Please see Expressions for more details and examples.

Table Source

Table Output

Target Table

Table Target

To establish the target table select either an existing table as the target table using the Target Table dropdown or click on the green "+" sign to create a new table as the target.

Table Creation

When creating a new table you will have the option to either create it as a View or as a Table.

Views:

Views are useful in that the time required for a step to execute is significantly less than when a table is used. The downside of views is that they are not as useful for data exploration in the table Details mode.

Tables:

When using a table as the target, a step will take longer to execute, but data exploration in the Details mode is much quicker than with a view.

Join Map

Table Join Map

Specify join conditions. Using the Guess button will find all matching columns from both Table 1 as well as Table 2. To add additional columns manually, right click anywhere in the section and select either Insert Row or Append Row, to add a row prior to the currently selected row or to add a row at the end, respectively. Then, type the column names to match from Table 1 to Table 2. To remove a field from the Join Map, simply right-click and select Delete.

Target Output Columns

Data Mapper Configuration

Table Data Mapper

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct records, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group_By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage

Output Filters

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy which is the same syntax as other expressions.

View examples and expression functions in the Expressions area.

Examples

Lookup Product Dimension Information

In this example, the modeler needs information from the product dimension table to make sense of the order fact table. As such, the Import Order Fact table is selected as the Source Table. The Import Product Dim table contains the desired lookup information, so it’s selected as the Lookup Table Source. Although available, no filters are applied to the lookup data table (nor any other data tables, for that matter).

In the Table Data Selection section, all columns are mapped from the source data table to the target data table.

No Data Filters are applied to either source or target data.

Lastly, the source data table is matched to the lookup data table using the Product_ID field found in each table. Only the Product_Description and Unit_Cost columns are appended to the target data table, with Unit_Cost being renamed to Retail_Unit_Cost in the process.

In the resulting target data table, the Product_Description and Retail_Unit_Cost columns have been added, based on matching values in the Product_ID column.

1.1.4.4.13 - Table Melt

Flip columns to rows

Description

Used to convert short, wide data tables into long, narrow data tables. Selected columns are transposed, with the column names converted into values across multiple rows.

Perhaps the easiest example to understand is to think of a data table with months listed as column headers:

Table Melt Input

Melting this data table would convert all of the month columns into rows.

Table Melt Output

By specifying which columns to transpose and which columns to leave alone, this becomes a powerful tool. Making this conversion in other ETL tools could require a dozen more steps.
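To make the reshaping concrete, here is a small pandas illustration of the melt concept (not the transform's implementation); the Region and month columns are hypothetical:

```python
# Illustration of melting: month columns become (Month, Value) rows,
# while the ID column (Region) is repeated for each melted row.
import pandas as pd

wide = pd.DataFrame({
    "Region": ["North", "South"],
    "Jan": [100, 80],
    "Feb": [110, 95],
})

long = wide.melt(id_vars=["Region"], var_name="Month", value_name="Value")
print(long)
#   Region Month  Value
# 0  North   Jan    100
# 1  South   Jan     80
# 2  North   Feb    110
# 3  South   Feb     95
```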

Source and Target Parameters

Table Melt Source Target

Source and Target

To establish the source and target, first select the data table to be extracted from the Source Table dropdown menu.

Target Table

Table Target

To establish the target table select either an existing table as the target table using the Target Table dropdown or click on the green "+" sign to create a new table as the target.

Table Creation

When creating a new table you will have the option to either create it as a View or as a Table.

Views:

Views are useful in that the time required for a step to execute is significantly less than when a table is used. The downside of views is that they are not as useful for data exploration in the table Details mode.

Tables:

When using a table as the target, a step will take longer to execute, but data exploration in the Details mode is much quicker than with a view.

Pre-Melt Table Data Selection

Table Pre-Melt

This section is a bit different from the standard Table Data Selection. Basically this is used to specify which columns are to be used in the Melt operation. This includes ID columns and Variable/Value columns.

For more details regarding Table Data Selection, see details here: Table Data Selection

Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset of Source Data

Any valid Python expression is acceptable to subset the data. Please see Expressions for more details and examples.

Apply Secondary Filter To Result Data

Any valid Python expression is acceptable to subset the data. Please see Expressions for more details and examples

Final Data Table Slicing (Limit)

To limit the data, simply check the Apply Row Slicer box and then specify the following:

  • Initial Rows to Skip: Rows of data to skip (column header row is not included in count)
  • End at Row: Last row of data to include. This is different from simply counting rows at the end to drop

Melt Layout

Table Melt Layout

There is a Guess Layout button available to allow Analyze a first crack at specifying ID columns. By default, all text (data type of String) columns are placed in the Keys section. Numeric columns are not placed into Keys by default, but they are allowed to be there based on the model’s needs.

Columns to Use as IDs (Keys)

ID columns are the columns which remain intact. These columns are effectively repeated for every instance of a variable/value combination. For a monthly table, this would result in 12 repetitions of the ID columns.

ID columns can be added automatically or manually. To add the columns automatically, use the aforementioned Guess Layout button. To add additional columns manually, right click anywhere in the section and select either Insert Row or Append Row, to add a row prior to the currently selected row or to add a row at the end, respectively. Then, type the column name to use as an ID.

To remove a field from the IDs, simply right-click and select Delete.

Melt Result Column Naming

There are 2 values to specify. Both of these values will become column names in the target data table.

  • Variable Column Name: As specified in the transform, the variable names are derived from the current source column names. Essentially, specify a column name which will represent the data originally represented in the source data table columns.
  • Value Column Name: Specify a column name to represent the data represented within the source data table. Typically this will be a numerical unit: Dollars, Pounds, Degrees, Percent, etc.

Examples

See the example in the documentation above.

1.1.4.4.14 - Table Outer Join

Combine data tables using specified join key(s)

Description

Use this transform, as you might have expected, to perform a full outer join operation on two data tables, combining them into a single data table based upon the join key(s) specified.

For more details on outer join methodology, see here: Wikipedia SQL Full Outer Join

Table Data Selection

Table Source

Specify the source data table by selecting it from the dropdown menu.

Source Columns

Specify any columns to be included here. Selecting the Inspect Source and Populate Source Mapping Table buttons will make these columns available for the join operation.

Select Subset of Source Data

Any valid Python expression is acceptable to subset the data. Please see Expressions for more details and examples.

Table Source

Table Output

Target Table

Table Target

To establish the target table select either an existing table as the target table using the Target Table dropdown or click on the green "+" sign to create a new table as the target.

Table Creation

When creating a new table you will have the option to either create it as a View or as a Table.

Views:

Views are useful in that the time required for a step to execute is significantly less than when a table is used. The downside of views is that they are not as useful for data exploration in the table Details mode.

Tables:

When using a table as the target, a step will take longer to execute, but data exploration in the Details mode is much quicker than with a view.

Join Map

Table Join Map

Specify join conditions. Using the Guess button will find all matching columns from both Table 1 as well as Table 2. To add additional columns manually, right click anywhere in the section and select either Insert Row or Append Row, to add a row prior to the currently selected row or to add a row at the end, respectively. Then, type the column names to match from Table 1 to Table 2. To remove a field from the Join Map, simply right-click and select Delete.

Target Output Columns

Data Mapper Configuration

Table Data Mapper

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct records, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group-By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage section.

Output Filters

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy, which is the same syntax used by other expressions.

View examples and expression functions in the Expressions area.
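
For instance, a filter condition is simply a SQLAlchemy boolean expression. The sketch below is self-contained for illustration only; within the step, the table is supplied for you and just the condition itself is entered (the table and column names here are hypothetical):

    from sqlalchemy import MetaData, Table, Column, Integer, Text, and_

    models = Table(
        'models', MetaData(),
        Column('ModelName', Text),
        Column('Mfg_ID', Integer),
    )

    # Keep only rows that have a manufacturer assigned
    condition = and_(models.c.Mfg_ID.isnot(None), models.c.Mfg_ID > 0)
    print(condition)  # renders the condition as SQL text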

Examples

Join Automobile Manufacturers with Models

In this example, consider the following source data tables. First is a list of automobile manufacturers.

Mfg_ID  Manufacturer
1       Aston Martin
2       Porsche
3       Lamborghini
4       Ferrari
5       Koenigsegg

Next is a list of automobile models with a manufacturer ID. Note that there are several models with no manufacturer.

ModelName          Mfg_ID
Aventador          3
Countach           3
DBS                1
Enzo               4
One-77             1
Optimus Prime
Batmobile
Agera              5
Lightning McQueen

To get a list of models by manufacturer, it makes sense to join on Mfg_ID. By leveraging outer join concepts, the output will also be able to show those items which do not have any matches.

First, specify parameters for Table 1 Data Selection. The source data table is selected and all columns are listed.

Next, specify parameters for Table 2 Data Selection. Once again, the source data table is selected and all columns are listed.

Finally, the join conditions are set in the Table Output tab. Using the Guess button, Analyze properly identifies the Mfg_ID column to use as the Join Key.

Lastly, the Target Output Columns are specified automatically using the Propagate button. This effectively includes all columns from all tables, with any join columns only being included a single time. Note that the columns are sorted alphabetically, first by Manufacturer and next by ModelName.

As expected, the final output includes all rows from both tables, whether they had a match in both tables or not. As such, this time Porsche does indeed show up despite having no models. Additionally, Batmobile, Lightning McQueen, and Optimus Prime are included in the results even though none of them have a manufacturer. Besides, who can say ‘No’ to them?
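
For those who think in SQL, the following is a rough SQLAlchemy sketch of the full outer join this example performs. It mirrors the tables above for illustration only; it is not PlaidCloud's internal code:

    from sqlalchemy import MetaData, Table, Column, Integer, Text, select

    metadata = MetaData()
    mfg = Table('manufacturers', metadata,
                Column('Mfg_ID', Integer), Column('Manufacturer', Text))
    models = Table('models', metadata,
                   Column('ModelName', Text), Column('Mfg_ID', Integer))

    # full=True keeps unmatched rows from both sides, so Porsche and the
    # manufacturer-less models all appear in the result.
    stmt = (
        select(mfg.c.Manufacturer, models.c.ModelName)
        .select_from(mfg.outerjoin(models, mfg.c.Mfg_ID == models.c.Mfg_ID, full=True))
        .order_by(mfg.c.Manufacturer, models.c.ModelName)
    )
    print(stmt)  # prints the generated SQL for inspection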

1.1.4.4.15 - Table Pivot

Flip rows to columns

Description

Used to convert long, narrow data tables into short, wide data tables. Selected columns are transposed, with the column names converted into values across multiple columns.

Perhaps the easiest example to understand is to think of a data table with months listed as rows:

Table Pivot Input

Pivoting this data table would convert all of the month rows into columns.

Table Pivot Output

By specifying which columns to transpose and which columns to leave alone, this becomes a powerful tool. Making this conversion in other ETL tools could require a dozen more steps.

Source and Target Parameters

Table Pivot Source Target

Source Table Selection

To establish the source and target, first select the data table to be extracted from using the dropdown menu.

Target Table Selection

Target Table

Table Target

To establish the target table, either select an existing table using the Target Table dropdown or click the green "+" sign to create a new table as the target.

Table Creation

When creating a new table you will have the option to either create it as a View or as a Table.

Views:

Views are useful in that the time required for a step to execute is significantly less than when a table is used. The downside of views is that they are not as useful for data exploration in the table Details mode.

Tables:

When using a table as the target, a step will take longer to execute, but data exploration in the Details mode is much quicker than with a view.

Pivot Column Selection

The Category Column to Transform into Column Headers is where you specify the column in the Source Table whose values will be pivoted into column headers. The Value Column to Pivot to Column Values is the column in the Source Table that contains the values. The Value Aggregation Option is where you specify how you want the data to aggregate.
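
Conceptually, these three settings play the same roles as the arguments to a pandas pivot_table call. The sketch below assumes a months-as-rows input like the one pictured above; PlaidCloud performs the pivot in the database, so this is purely illustrative:

    import pandas as pd

    df = pd.DataFrame({
        'Account': ['Sales', 'Sales', 'Sales'],
        'Month':   ['Jan', 'Feb', 'Mar'],  # Category Column -> column headers
        'Amount':  [100, 120, 90],         # Value Column -> column values
    })

    # aggfunc corresponds to the Value Aggregation Option
    wide = df.pivot_table(index='Account', columns='Month',
                          values='Amount', aggfunc='sum')
    print(wide)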

Table Data Selection

Table Pivot Data Selection

The Table Data Selection tab is used to map columns from the source data table to the target data table. All source columns on the left side of the window are automatically mapped to the target data table depicted on the right side of the window. Using the Inspect Source menu button, there are a few additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

In addition to each of these options, each choice offers the ability to preview the source data.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

To rearrange columns in the target data table, select the desired column(s), then right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

To return only distinct options, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return distinct results only.

To aggregate results, select the Summarize menu option. This will toggle a set of drop down boxes for each column in the target data table. The following summarization options are available:

  • Group by (set as default)
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Mean
  • Median
  • Mode
  • Std Dev
  • Variance
  • Product
  • Absolute Val
  • Quantile
  • Skew
  • Kurtosis
  • Mean Abs Dev
  • Cumulative Sum
  • Cumulative Min
  • Cumulative Max
  • Cumulative Product

For more aggregation details, see the Analyze overview page here.

Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset of Data

Any valid Python expression is acceptable to subset the data. Please see Expressions for more details and examples.

Apply Secondary Filter To Result Data

Any valid Python expression is acceptable to subset the data. Please see Expressions for more details and examples.

Final Data Table Slicing (Limit)

To limit the data, simply check the Apply Row Slicer box and then specify the following:

  • Initial Rows to Skip: Rows of data to skip (column header row is not included in count)
  • End at Row: Last row of data to include. This is different from simply counting rows at the end to drop

1.1.4.4.16 - Table Union All

Combine multiple data tables into a single data table

Description

Use to combine multiple data tables with the same column structure into a single data table. For example, time series data is a prime candidate for this transform. The result is all of the records from the combined tables.

Sources

The Sources section serves as a collection of all data tables to append together. Typically, all of the data tables will have the same (or similar) column structure. There are two buttons available to add a data table to the list:

  • Insert Row
  • Append Row

Additionally, right-clicking in the Select Source to Edit window will display the same options. Right-clicking on a table already added will also display the Delete option.

To execute the transform properly, there will need to be one entry in the Sources section for every source data table to append together. These entries are listed in the order in which they will be appended. To adjust the order, right-clicking on a table will display the following options:

  • Move Down (if applicable)
  • Move To Bottom (if applicable)
  • Move Up (if applicable)
  • Move To Top (if applicable)

By default, each source is named New Table, but the modeler is encouraged to provide descriptive names by double-clicking the name and renaming accordingly.

Target Table

By default, the Target Table is left blank. Before naming, note that data tables must follow Linux naming conventions. As such, we recommend that names only consist of alphanumeric characters. Analyze will automatically scrub any invalid characters from the name. Additionally, it will limit the length to 256 characters, so be concise!

Target Table

Table Target

To establish the target table, either select an existing table using the Target Table dropdown or click the green "+" sign to create a new table as the target.

Table Creation

When creating a new table you will have the option to either create it as a View or as a Table.

Views:

Views are useful in that the time required for a step to execute is significantly less than when a table is used. The downside of views is that they are not as useful for data exploration in the table Details mode.

Tables:

When using a table as the target, a step will take longer to execute, but data exploration in the Details mode is much quicker than with a view.

Table Data Selection Tab

Source Table

Table Selection

There are two options for selecting the source table or tables:

The first option is to use the Specific Table dropdown to select a single table.

The second is to use the Tables Matching Search option, in which you specify the Search Path and Search Text to select the table or tables that match the search criteria. This option is very useful if you have a workflow that creates a series of commonly named tables that have been saved with the date appended.
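
As an illustration of the idea (the step's actual pattern syntax may differ from the fnmatch style used here), matching a family of date-suffixed table names looks like this:

    import fnmatch

    tables = ['sales_2023_01', 'sales_2023_02', 'inventory_2023_01']
    print(fnmatch.filter(tables, 'sales_*'))  # ['sales_2023_01', 'sales_2023_02']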

Table Dynamic Selection

Source Columns

Data Mapper Configuration

Table Data Mapper

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct options, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group-By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage section.

Data Filters

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy, which is the same syntax used by other expressions.

View examples and expression functions in the Expressions area.

1.1.4.4.17 - Table Union Distinct

Consolidate data tables

Description

Use to combine multiple data tables with the same column structure into a single data table. For example, time series data is a prime candidate for this transform. The result is always the distinct set of records after combining the data.

Sources

The Sources section serves as a collection of all data tables to append together. Typically, all of the data tables will have the same (or similar) column structure. There are two buttons available to add a data table to the list:

  • Insert Row
  • Append Row

Additionally, right-clicking in the Select Source to Edit window will display the same options. Right-clicking on a table already added will also display the Delete option.

To execute the transform properly, there will need to be one entry in the Sources section for every source data table to append together. These entries are listed in the order in which they will be appended. To adjust the order, right-clicking on a table will display the following options:

  • Move Down (if applicable)
  • Move To Bottom (if applicable)
  • Move Up (if applicable)
  • Move To Top (if applicable)

By default, each source is named New Table, but the modeler is encouraged to provide descriptive names by double-clicking the name and renaming accordingly.

Target Table

By default, the Target Table is left blank. Before naming, note that data tables must follow Linux naming conventions. As such, we recommend that names only consist of alphanumeric characters. Analyze will automatically scrub any invalid characters from the name. Additionally, it will limit the length to 256 characters, so be concise!

Target Table

Table Target

To establish the target table, either select an existing table using the Target Table dropdown or click the green "+" sign to create a new table as the target.

Table Creation

When creating a new table you will have the option to either create it as a View or as a Table.

Views:

Views are useful in that the time required for a step to execute is significantly less than when a table is used. The downside of views is that they are not as useful for data exploration in the table Details mode.

Tables:

When using a table as the target, a step will take longer to execute, but data exploration in the Details mode is much quicker than with a view.

Table Data Selection Tab

Source Table

Table Selection

There are two options for selecting the source table or tables:

The first option is to use the Specific Table dropdown to select a single table.

The second is to use the Tables Matching Search option, in which you specify the Search Path and Search Text to select the table or tables that match the search criteria. This option is very useful if you have a workflow that creates a series of commonly named tables that have been saved with the date appended.

Table Dynamic Selection

Source Columns

Data Mapper Configuration

Table Data Mapper

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct options, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group-By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage section.

Data Filters

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy, which is the same syntax used by other expressions.

View examples and expression functions in the Expressions area.

1.1.4.4.18 - Table Upsert

Perform an update of existing records or append new ones

Description

Performs an update of existing records and appends new ones.

Upsert Parameters

Source And Target

To establish the source and target tables, first select the data table to be extracted from using the Source Table dropdown menu. Next, select an existing table as the target table using the Target Table dropdown.

Source Table Data Selection

Table Upsert

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct options, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Update Key

In order for the Upsert to update existing records and append new ones, you need to select the columns in the data that form a unique key.
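
Conceptually, the step behaves like a database upsert keyed on those columns. The sketch below uses PostgreSQL-style INSERT ... ON CONFLICT in SQLAlchemy; the table and key columns are hypothetical, and the step performs the equivalent work for you once the Update Key is selected:

    from sqlalchemy import MetaData, Table, Column, Numeric, Text
    from sqlalchemy.dialects.postgresql import insert

    target = Table('costs', MetaData(),
                   Column('sku', Text, primary_key=True),     # Update Key column
                   Column('period', Text, primary_key=True),  # Update Key column
                   Column('amount', Numeric))

    stmt = insert(target).values(sku='A-100', period='2023_01', amount=42)
    stmt = stmt.on_conflict_do_update(
        index_elements=['sku', 'period'],       # the unique key columns
        set_={'amount': stmt.excluded.amount},  # update matching records in place
    )
    print(stmt)  # new keys are inserted; existing keys are updated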

Source Data Filters

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy, which is the same syntax used by other expressions.

View examples and expression functions in the Expressions area.

1.1.4.5 - Dimension Steps

1.1.4.5.1 - Dimension Clear

Clears the contents of a dimension including structure, values, aliases, properties, and alternate hierarchies

Description

Clears the contents of a dimension including structure, values, aliases, properties, and alternate hierarchies

Dimension Clear

Dimension Selection

Specify Dimension Dynamically

If dimensions or paths were created dynamically, then the same variables can be used to clear them. Using variables in the clear process is useful since it eliminates the need to update the Dimension Clear step manually on a periodic basis.

An example that uses the current_month variable to dynamically clear the Materials dimension:

/Dimensions/{current_month}/Products/Materials

Use Specific Dimension

Use the dropdown menu to select a specific dimension to clear.

1.1.4.5.2 - Dimension Create

Creates a dimension for use and loading

Description

Creates a dimension for use and loading

Dimension Create

Dimension To Create

Name

You can either use a specific name for the dimension to be created or include variables for dynamic naming.

Variables are useful when dimensions are updated on a periodic basis and retaining the historical view is desired.

An example that uses the current_month variable to dynamically name the dimension:

dimension_name_{current_month}

Path

Paths let you create folder structures that the dimensions are stored in. You can use variables here as well to make the folder structure dynamic. An example that uses the current_month variable to dynamically name a folder:

/Dimensions/{current_month}/Product/

Memo

The Memo field is used as a place to store comments or notes.

1.1.4.5.3 - Dimension Delete

Deletes a dimension along with all associated structure, values, properties, aliases, and alternate hierarchies

Description

Deletes a dimension along with all associated structure, values, properties, aliases, and alternate hierarchies

Dimension Clear

Dimension Selection

Specify Dimension Dynamically

If dimensions or paths were created dynamically, then the same variables can be used to delete them. Using variables in the delete process is useful since it eliminates the need to update the Dimension Delete step manually on a periodic basis.

An example that uses the current_month variable to dynamically delete the Materials dimension:

/Dimensions/{current_month}/Products/Materials

Use Specific Dimension

Use the dropdown menu to select a specific dimension to delete.

1.1.4.5.4 - Dimension Load

Load and update dimensions using data

Description

Load and update dimensions using data from PlaidCloud tables.

Dimension Load

Dimension Selection

Specify Dimension Dynamically

To specify a dimension dynamically, include project and/or local variables in the name.

Variables are useful when dimensions are updated on a periodic basis and retaining the historical view is desired.

An example that uses the current_month variable to dynamically load the dimension:

dimension_name_{current_month}

Use Specific Dimension

To use a specific dimension, select it using the drop down menu.

Load to Alternate Hierarchy

To load an Alternate Hierarchy, first select the dimension either dynamically or specifically, then click the Load to Alternate Hierarchy checkbox and enter the name of the alternate hierarchy to be loaded.

Source Table

Dynamic

To specify the source table dynamically, click the Dynamic checkbox and enter the table name, including project and/or local variables in the name.

Static

To use a specific source table, select the table using the drop down menu.

Dimension Properties And Table Layout

Default Consolidation Type

There are three options for consolidation types:

  • "+": Aggregates values in the dimension.
  • "-": Subtracts values in the dimension.
  • "~": No aggregation is performed in the dimension.

Table Column Format

There are two options for formatting the Source Table when loading a dimension.

Parent Child

In a Parent Child table, two columns represent the dimension's structure: Parent and Child.

EXAMPLE PARENT CHILD

PARENT      CHILD     Consolidation Type
Parent All  Parent 1  ~
Parent All  Parent 2  ~
Parent 1    Child 1   +
Parent 2    Child 2   +
Child 1     Child 3   +
Child 1     Child 4   +
Child 2     Child 5   +

Flattened Levels

In a Flattened Levels table there can be any number of columns, with each column representing a level of the dimension.

EXAMPLE FLATTENED LEVELS

Level 1     Level 2   Level 3  Level 4
Parent All  Parent 1  Child 1  Child 3
Parent All  Parent 1  Child 1  Child 4
Parent All  Parent 2  Child 2  Child 5
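
Both layouts describe the same hierarchy. As an illustration (not PlaidCloud's internal code), the flattened rows above can be reduced to the same parent/child pairs shown in the first example:

    import pandas as pd

    levels = pd.DataFrame(
        [['Parent All', 'Parent 1', 'Child 1', 'Child 3'],
         ['Parent All', 'Parent 1', 'Child 1', 'Child 4'],
         ['Parent All', 'Parent 2', 'Child 2', 'Child 5']],
        columns=['Level 1', 'Level 2', 'Level 3', 'Level 4'])

    # Each adjacent pair of level columns implies a parent -> child edge
    edges = set()
    for row in levels.itertuples(index=False):
        for parent, child in zip(row, row[1:]):
            edges.add((parent, child))
    for parent, child in sorted(edges):
        print(parent, '->', child)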

Column Mapping

Using the Inspect Source menu button populates the Source Column in the data mapper. Once the Source Column has been populated, use the Kind drop down menu to map the Source Columns to the appropriate column type.

1.1.4.5.5 - Dimension Sort

Sort dimensions automatically

Description

Sort dimensions automatically.

Dimension Clear

Dimension Selection

Specify Dimension Dynamically

If dimensions or paths were created dynamically, then the same variables can be used to sort them. Using variables in the sort process is useful since it eliminates the need to update the Dimension Sort step manually on a periodic basis.

An example that uses the current_month variable to dynamically sort the Materials dimension:

/Dimensions/{current_month}/Products/Materials

Use Specific Dimension

Use the dropdown menu to select a specific dimension to sort.

1.1.4.6 - Document Steps

1.1.4.6.1 - Compress PDF

Applies a PDF compression process to shrink the PDF size

Documentation coming soon...

1.1.4.6.2 - Concatenate Files

Concatenates two or more documents together

Documentation coming soon...

1.1.4.6.3 - Convert Document Encoding

Converts a document from one encoding to another

Description

Converts a document from one encoding to another.

Examples

Create a source input, select the input file, and browse for the file within that location. Select the desired output location and browse to select the desired location for the file. Save and run.

1.1.4.6.4 - Convert Document Encoding to ASCII

Updates file encoding and converts all characters to ASCII

Description

Updates file encoding and converts all characters to ASCII. This is particularly useful if the source of information has mixed encodings or other tools don’t support certain encodings.
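
One simple way to picture the conversion (the step's exact transliteration rules may differ) is decoding with replacement and then dropping anything outside ASCII:

    # Stand-in for the input file bytes; real input would be read from a file
    raw = 'Café – naïve'.encode('utf-8')

    text = raw.decode('utf-8', errors='replace')  # tolerate bad byte sequences
    ascii_text = text.encode('ascii', errors='ignore').decode('ascii')
    print(ascii_text)  # non-ASCII characters are removed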

Examples

Select the input file and browse for the file within that location. Select the desired output location, and browse to select the desired location for the file. Save and run.

1.1.4.6.5 - Convert Document Encoding to UTF-8

Updates file encoding and converts all characters to UTF-8

Description

Updates file encoding and converts all characters to UTF-8. This is particularly useful if the information source has mixed encodings or other tools don’t support certain encodings.

Examples

Select the input file and browse for the file within that location. Select the desired output location and browse to select the desired location for the file. Save and run.

1.1.4.6.6 - Convert Document Encoding to UTF-16

Updates file encoding and converts all characters to UTF-16

Description

Updates file encoding and converts all characters to UTF-16. This is particularly useful if the information source has mixed encodings or other tools don’t support certain encodings.

Examples

Select the input file and browse for the file within that location. Select the desired output location and browse to select the desired location for the file. Save and run.

1.1.4.6.7 - Convert Image to PDF

Converts an image to a PDF document

Documentation coming soon...

1.1.4.6.8 - Convert PDF or Image to JPEG

Converts a PDF or other image format to JPEG image

Documentation coming soon...

1.1.4.6.9 - Copy Document Directory

Copy entire directory in PlaidCloud Document

Description

Copy an entire directory within PlaidCloud Document.

Copy Directory

First, select the appropriate account from the dropdown menu.

Next, press the Browse button to select the directory you’d like to copy.

Select Destination

First, select the appropriate account from the dropdown menu.

Next, press the Browse button to select the destination for the copied directory.

If desired, the copied directory can be given a new name. To do so, simply check the Rename the Copied Folder to: box and type in a new name.

Examples

No examples yet...

1.1.4.6.10 - Copy Document File

Copy a single file within PlaidCloud Document.

Description

Copy a single file within PlaidCloud Document.

File To Copy

First, select the appropriate account from the dropdown menu.

Next, press the Browse button to select the file you’d like to copy.

Select Destination

First, select the appropriate account from the dropdown menu.

Next, press the Browse button to select the destination for the copied file.

By default, Analyze will not allow files to be overwritten. Instead, a numerical suffix will be added to each subsequent copy.

To overwrite the existing file, simply check the Allow Overwriting Existing File box.

To rename the file, check the Rename the copied file to box and type in a new name.

Examples

No examples yet...

1.1.4.6.11 - Create Document Directory

Use PlaidCloud Document to create a new Document Directory

Description

Create a new directory within PlaidCloud Document.

Where to Create New Folder

First, select the appropriate account from the dropdown menu.

Next, press the Browse button to select the parent directory.

New Folder Name

Type the name for the new directory.

Examples

No examples yet...

1.1.4.6.12 - Crop Image to Headshot

Automatic headshot cropping of an image

Documentation coming soon...

1.1.4.6.13 - Delete Document Directory

Delete an existing directory from within PlaidCloud Document

Description

Delete an existing directory from within PlaidCloud Document.

Folder to Delete

First, select the appropriate account from the dropdown menu.

Next, press the Browse button to select the directory to delete.

Examples

No examples yet...

1.1.4.6.14 - Delete Document File

Delete an existing file from within PlaidCloud Document

Description

Delete an existing file from within PlaidCloud Document.

File to Delete

First, select the appropriate account from the dropdown menu.

Next, press the Browse button to select the file to delete.

Examples

No examples yet...

1.1.4.6.15 - Document Text Substitution

Perform text substitution within a specified file

Description

Performs text substitution in the specified file.

Examples

No examples yet...

1.1.4.6.16 - Fix File Extension

Determines the proper file extension and renames the file

Documentation coming soon...

1.1.4.6.17 - Merge Multiple PDFs

Merges multiple PDFs into a single PDF document

Documentation coming soon...

1.1.4.6.18 - Rename Document Directory

Rename an existing directory in PlaidCloud Document

Description

Rename an existing directory within PlaidCloud Document.

Folder to Rename

First, select the appropriate account from the dropdown menu.

Next, press the Browse button to select the directory to be renamed.

Rename To

Type the new name for the directory.

Examples

No examples yet...

1.1.4.6.19 - Rename Document File

Rename an existing file in PlaidCloud Document

Description

Rename an existing file within PlaidCloud Document.

File to Rename

First, select the appropriate account from the dropdown menu.

Next, press the Browse button to select the file to be renamed.

Rename To

Type the new name for the file.

Examples

No examples yet...

1.1.4.7 - Notification Steps

1.1.4.7.1 - Notify Distribution Group

Send an email to a PlaidCloud distribution group

Description

Send an email notification to a PlaidCloud distribution group. Messages are sent from info@tartansolutions.com. No outbound setup is required.

Select PlaidCloud Distribution List

Select a single distribution list from the drop down menu. Distribution lists can be created using Tools. For details on creating a distribution list, see here: PlaidCloud Tools – Distro.

Message

Specify Subject and Body as desired.

Please note that both Project Variables and Workflow Variables are available for use with this transform, in both the subject line and the message body.

Additionally, standard HTML code is permitted in the body to further customize the look of the email messages.

Examples

In this example, all of the system variables are used. Additionally, there is a small bit of HTML used to format the first line of the body. Executing this transform will send the following email to all members specified in the distribution group:

1.1.4.7.2 - Notify Agent

Notify a PlaidCloud Agent

Description

Notify a PlaidCloud Agent.

Examples

No examples yet...

1.1.4.7.3 - Notify Via Email

Send email notifications

Description

Send email notifications. Messages are sent from info@tartansolutions.com email account. No outbound setup is required.

Email Addresses

Specify any number of email recipients. Acceptable delimiters include semicolon (;) and comma (,).

Message

Specify Subject and Body as desired.

Please note that both Project Variables and Workflow Variables are available for use with this transform, in both the subject line and the message body.

Additionally, standard HTML code is permitted in the body to further customize the look of the email messages.

Attachments

Attaching files to emails is very simple. Select a file or folder from Document to attach. If a folder is selected, the contents of the folder will be attached as individual files. Variable substitution works with paths for better control of file attachments when sending out personalized emails.

Examples

In this example, all of the system variables are used. Additionally, there is a small bit of HTML used to format the first line of the body. Executing this transform will send the following email:

1.1.4.7.4 - Notify Via Log

Write a message to the Analyze workflow log

Description

Write a message to the Analyze workflow log.

Message Parameters

Type the desired message to write to the log. Then select one of three severity levels from the following:

  • Information
  • Warning
  • Error

Please note that both Project Variables and Workflow Variables are available for use with this transform.

Examples

In this example, executing this transform will append an Information item to the log, stating Write a message to the workflow log. I believe you have my stapler, Demo.

1.1.4.7.5 - Notify via Microsoft Teams

Send notifications to Microsoft Teams channels

Adding Microsoft Teams notifications from a workflow is a two part process. The two parts are:

  1. Create a Microsoft Teams external connection
  2. Add Microsoft Teams notification steps to your workflows

Add Microsoft Teams Notification Step to Workflow

Adding Microsoft Teams notification steps to the workflow is the same as adding other steps to a workflow. Upon adding the step, open the step configuration, complete the form, and save it. You can now test your Microsoft Teams notification.

Formatting the Microsoft Teams Message

Teams has many formatting options including adding images and mentioning users. Please reference the Teams Message Text Formatting documentation for details.

Create Microsoft Teams External Connection

This is a one-time setup to allow PlaidCloud to send Microsoft Teams notifications on your behalf. Microsoft Teams allows creation of a Webhook App (a generic way to send a notification over the internet). After creating the Webhook App in Microsoft Teams, add the supplied credentials to PlaidCloud to allow its use.

Microsoft Teams Webhook App Creation

These steps will need to be performed by a Microsoft Teams administrator. Follow the steps outlined here for Creating Incoming Webhook (Microsoft Teams Documentation).

PlaidCloud External Connection Setup

These steps will need to be performed by a PlaidCloud workspace administrator with permissions to create External Data Connections. Follow these steps to create the connection:

  1. Navigate to Analyze > Tools > External Data Connections
  2. Under the + New Connection selection, pick Microsoft Teams Webhook
  3. Complete the name, description, and paste in the webhook url generated during the webhook creation above. The name provided here will be shown as the selection in the workflow step so it should be descriptive if possible.
  4. Select the + Create button

Examples

No examples yet...

1.1.4.7.6 - Notify via Slack

Send Slack notifications

Adding Slack notifications from a workflow is a two part process. The two parts are:

  1. Create a Slack Webhook external connection
  2. Add Slack notification steps to your workflows

Add Slack Notification Step to Workflow

Adding Slack notification steps to the workflow is the same as adding other steps to a workflow. Upon adding the step, open the step configuration, complete the form, and save it. You can now test your Slack notification.

Formatting the Slack Message

Slack has many formatting options including adding images and mentioning users. Please reference the Slack Text Formatting documentation for details.

Create Slack Webhook External Connection

This is a one-time setup to allow PlaidCloud to send Slack notifications on your behalf. Slack allows creation of a Webhook App (a generic way to send a notification over the internet). After creating the Webhook App in Slack, add the supplied credentials to PlaidCloud to allow its use.
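
For context, an incoming webhook is just an HTTP POST of a small JSON payload. The sketch below shows the shape of such a call with a placeholder URL; PlaidCloud's notification step performs this for you using the stored external connection:

    import json
    import urllib.request

    payload = {'text': 'Workflow *finished* successfully'}
    req = urllib.request.Request(
        'https://hooks.slack.com/services/T000/B000/XXXXXXXX',  # placeholder URL
        data=json.dumps(payload).encode('utf-8'),
        headers={'Content-Type': 'application/json'},
    )
    urllib.request.urlopen(req)  # Slack replies with "ok" on success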

Slack Webhook App Creation

These steps will need to be performed by a Slack administrator. Follow these steps to create a Slack Webhook App:

  1. From Slack, open the workspace control menu and select Settings & administration > Manage Apps
  2. Select Custom Integrations from the Apps category list
  3. Select Incoming Webhooks from the list of apps
  4. Select the Add to Slack button
  5. On the next screen, select the Slack Channel you wish to post the messages and continue. This is the default channel that will be used but it can be overridden in each notification including sending DMs to specific individuals.
  6. Copy the webhook URL displayed. This will be used later so keep it in a safe place. It will look something like this: https://hooks.slack.com/services/T04QZ1435/G02TGBFTOP8/K9GZrR2ThdJz1uSiL9YeZxoR
  7. You can customize the appearance, name, and emoji before saving. These customizations are only the defaults and these can be overridden on each notification step within a PlaidCloud workflow.

PlaidCloud External Connection Setup

These steps will need to be performed by a PlaidCloud workspace administrator with permissions to create External Data Connections. Follow these steps to create the connection:

  1. Navigate to Analyze > Tools > External Data Connections
  2. Under the + New Connection selection, pick Slack Webhook
  3. Complete the name, description, and paste in the webhook url provided in step 6 above. The name provided here will be shown as the selection in the workflow step so it should be descriptive if possible.
  4. Select the + Create button

Examples

No examples yet...

1.1.4.7.7 - Notify Via SMS

Send an SMS message

Description

Send an SMS message. Messages are sent from info@tartansolutions.com email account. No outbound setup or data is required.

Carrier and Number

From the Mobile Provider dropdown list, select from hundreds of domestic and international providers. For the convenience of the majority of our customers, USA carriers are listed first, followed by all international options listed alphabetically.

Next, specify a valid phone number. Acceptable formats include the following:

  • 5555555555
  • 555.555-5555
  • 555.555.5555
  • 555-555-5555

Message

Specify Subject and Message as desired.

Please note that both Project Variables and Workflow Variables are available for use with this transform, in both the subject line and the message body.

WARNING: Standard data rates may apply for recipients.

Examples

No examples yet...

1.1.4.7.8 - Notify Via Twitter

Send a direct message from PlaidCloud

Description

Send a Twitter Direct Message (DM) from @plaidcloud.

Twitter Account

Specify the Twitter account to receive the DM from @plaidcloud. This user must be following @plaidcloud to receive the message. It is allowable, although not required, to prefix the username with the at sign (@).

Message

Enter the desired message. Analyze will not permit a value longer than 140 characters.

Please note that both Project Variables and Workflow Variables are available for use with this transform.

Examples

In this example, a DM is sent from @PlaidCloud to @tartansolutions. System variables are used in the message. The final message reads, Analyze Demo is running on #PlaidCloud.

1.1.4.7.9 - Notify Via Web Hook

Send a notification via Web Hook (URL)

Description

Send a notification via Web Hook (URL).

Examples

No examples yet...

1.1.4.8 - Agent Steps

1.1.4.8.1 - Agent Remote Execution of SQL

Execute specified SQL on a remote database through a PlaidLink Agent connection.

Description

Execute specified SQL on a remote database through a PlaidLink Agent connection.

1.1.4.8.2 - Agent Remote Export of SQL Result

Use a specified SQL on a remote database through PlaidLink Agent and export to PlaidCloud

Description

Execute specified SQL on a remote database through a PlaidLink Agent connection and export the result for use by PlaidCloud or other downstream systems.

Examples

No examples yet...

1.1.4.8.3 - Agent Remote Import Table into SQL Database

Import Data into SQL with PlaidLink Agent

Description

Imports specified data into SQL database on a remote system through a PlaidLink Agent connection.

Examples

No examples yet...

1.1.4.8.4 - Document - Remote Delete File

Deletes a remote file system file using a PlaidLink agent installed within the firewall

Description

Deletes a remote file system file using a PlaidLink agent installed within the firewall.

Examples

First, make a selection from the “Agent to Use” dropdown. Next, enter the file or folder path under “File or Folder Path for Delete”. Finally, select “Save and Run Step”.

1.1.4.8.5 - Document - Remote Export File

Exports a file to a remote file system using a PlaidLink agent installed within the firewall

Description

Exports a file to a remote file system using a PlaidLink agent installed within the firewall.

Examples

First, make a selection from the “Agent to Use” dropdown. Next, browse for the file or folder path under “File or Folder to Export”. Then enter the location under “Export Path Destination”. Finally, select “Save and Run Step”.

1.1.4.8.6 - Document - Remote Import File

Imports a remote file system file using a PlaidLink agent installed within the firewall.

Description

Imports a remote file system file using a PlaidLink agent installed within the firewall.

Examples

First, make a selection from the “Agent to Use” dropdown. Next, enter the file or folder path under “File or Folder Path for Import”. Then enter the folder destination under “Folder Destination”. Select the file type from the dropdown. Finally, select “Save and Run Step”.

1.1.4.8.7 - Document - Remote Rename File

Renames or moves a remote file system file using a PlaidLink agent installed within the firewall

Description

Renames or moves a remote file system file using a PlaidLink agent installed within the firewall.

Examples

First, make a selection from the “Agent to Use” dropdown.

Next, enter “Source Path” and “Destination Path”.

Finally, select “Save and Run Step”.

1.1.4.9 - General Steps

1.1.4.9.1 - Pass

This does nothing but may be useful for documenting workflows

Description

The Pass Through transform does not do anything. Its purpose as a placeholder is useful during development or when in need of a separator to section off steps during complex workflows.

1.1.4.9.2 - Run Remote Python

Run a Python file using PlaidLink

Description

This transform will run a Python file using PlaidLink. The Python file is executed on the remote system.

A set of global variables can be passed from the script execution on the remote system.

Examples

No examples yet...

1.1.4.9.3 - User Defined Transform

Use Python and Pandas directly in a workflow

Description

The Standard Workflow Transforms that come with PlaidCloud can typically perform nearly every operation you’ll need. Additionally, these Standard Transforms are continuously optimized for performance, and they provide the most robust data. However, when the standard options, used singularly or in groups, are not able to achieve your goals, you can create User Defined Transforms to meet your needs. Standard Python code is permitted.

Coding with Python is required to create a User Defined Transform. For additional information, please visit the Python website.

User Defined Transforms

To create a new User Defined Function (UDF), open the workflow which needs the custom transform, select the User Defined tab, and click the Add User Defined Function button. Specify an ID for the UDF. Once created, select the Edit function logic icon (far left) to open the “Edit User Defined Function” window.

Alternatively, a previously created User Defined function can be imported using the Import button from within the User Defined tab. Simply press that button and then select the appropriate workflow from the dropdown menu (this menu contains all workflows within the current workspace). Next, select the function(s) to be imported and press the Import Selected Functions button.

Once the function has been created/imported, proceed to the Analyze Steps tab of the workflow and add a User Defined Transform step in the appropriate position, just as you would add a Standard Transform. In the config window, select the appropriate User Defined Function from the dropdown menu.

1.1.4.9.4 - Wait

Pauses workflow execution for a specified period of time

Description

The Wait transform is used to pause processing for a specified duration. This can be especially helpful when waiting for I/O operations from other systems or for debugging workflows during development.

Duration Parameters

Specify a non-negative integer value using the Duration spinner.

Next, specify the unit of time from the dropdown menu. The following units are available for selection:

  • Seconds
  • Minutes
  • Hours

1.1.4.10 - PDF Reporting Steps

1.1.4.10.1 - Report Single

Generate a PDF document based on specific data from the report

Description

Generates a PDF report based on the defined RML template and input data sources for the report.

Examples

No examples yet...

1.1.4.10.2 - Reports Batch

Generate multiple PDF documents based on specific data from each report

Description

Generates many PDF reports based on the defined RML template and input data sources for each report.

Examples

No examples yet...

1.1.4.11 - Common Step Operations

1.1.4.11.1 - Advanced Data Mapper Usage

Using the advanced features of the Data Mapper

Review

Before jumping into the advanced usage capabilities of the Data Mapper, a brief review of the basic functionality will help.

Data Mapper Configuration

The Data Mapper is used to map columns from the source data to the target data table.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct options, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

Advanced Usage

Aggregation Options

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. The following summarization options are available:

Function                       Description
Group By                       Groups results by the value
Count                          Number of non-null observations in group
Count (including nulls)        Number of observations in group
Sum                            Sum of values in group
Mean                           Mean of values in group
Median                         Median of values in group
Mode                           Mode of values in group
Min                            Minimum of values in group
Max                            Maximum of values in group
First                          First value of values in group using the sorted order
Last                           Last value of values in group using the sorted order
Standard Deviation             Unbiased standard deviation in group
Sample Standard Deviation      Sample standard deviation in group
Population Standard Deviation  Population standard deviation in group
Variance                       Unbiased variance in group
Sample Variance                Sample variance in group
Population Variance            Population variance in group
Advanced Non-Group-By          Special aggregation selection when using window functions

Pick the appropriate summarization method for the column.

When using a Window Function, select Advanced Non-Group-By as the aggregation method. This special selection is required due to the aggregation inherent in the window function already.

Constants

Specifying a value in the Constant column will override the source column value, if specified, and populate the column with the constant value specified.

Cleaners

The Data Mapper provides a convenient point-and-click cleaner capability to apply conversions to the data within a column.

The cleaning operations include the following categories:

  • Text Trimming
  • Text Formatting
  • Text Transformations
  • Converting to and from NULL values
  • Number Formatting
  • Date Parsing

The results of the cleaner selections are converted into a consolidated expression, which is viewable in the Expression information.
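
For example, selecting a trim cleaner and an uppercase cleaner on a hypothetical customer_name column might consolidate into an expression along these lines:

    func.trim(func.upper(table.customer_name))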

Expressions

Expressions in the Data Mapper are one of the most powerful and flexible concepts in PlaidCloud. They provide nearly unlimited flexibility while being exceptionally performant, even on extremely large data.

Expressions are written using Python SQLAlchemy syntax along with a few additional helper functions available in PlaidCloud. This allows PlaidCloud to expose the full set of capabilities of the underlying data warehouse (e.g. Greenplum, SAP HANA, Redshift, etc...) directly. In addition, there are many resources available publicly that provide quick references for use of SQLAlchemy operations. By using standard SQLAlchemy syntax, PlaidCloud avoids the common pitfall of creating yet another domain specific syntax.

The expression editor is opened by double-clicking on the expression cell for the column. Once open, the list of columns are shown on the left while an extensive library of functions are shown on the right.

While it is entirely possible to type the expression directly into the editor, it is normally easier to use the point-and-click function and column selection to get started. The library of functions include the following groups:

  • Conditions
  • Column Specific Conditions
  • Conversions
  • Dates
  • Math
  • Text
  • Summarizations
  • Window Function Operations
  • Arrays
  • JSON
  • PostGIS (Geospatial)
  • Trigonometry

Once you have completed the expression, save the expression so it will be applied to the column.
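
As a simple illustration, an expression that doubles the amount for one region and passes all other rows through unchanged might look like the following (a sketch only; the amount and region column names are hypothetical):

    case((table.region == 'NA', table.amount * 2), else_=table.amount)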

View examples and expression functions in the Expressions area.

1.1.4.12 - Allocation By Assignment Dimension

Allocate values based on driver data and assignment dimension

Description

Allocate values based on an assignment dimension and a driver data table.

Allocation By Dimension

Data Table Settings

Assignment Dimension Hierarchy

Assignment Hierarchy

The Assignment Dimension Hierarchy gives the user the ability to point, click and filter either or both the Values To Allocate Table and Driver Data Table to create targeted allocations. The Assignment Dimension Hierarchy is created by combining dimensions that reference the Values To Allocate Table and the Driver Data Table.

Creating An Assignment Dimension Hierarchy

To create the Assignment Dimension Hierarchy you must first create the dimensions you wish to use as filters for the Values To Allocate Table and the Driver Data Table. The links below will guide you through creating these dimensions.

Creating Dimensions

Loading Dimensions

Creating The Main Hierarchy

Once the dimensions for the Values To Allocate Table and the Driver Data Table have been created the next step is to decide which of the dimensions for the Values To Allocate Table will serve as the Main Hierarchy for the Assignment Dimension Hierarchy.

Copy this dimension by navigating to the Dimensions tab in PlaidCloud, clicking on the dimension, and then selecting Actions and Copy Dimension. When you copy the dimension, a pop-up will appear asking you to enter a name for the copied dimension.

Adding Dimensions To The Assignment Hierarchy

Open the newly created Assignment Dimension, click on the down arrow next to Properties, and select New Property.

Assignment Hierarchy Property

This will open the Property Configuration dialog box:

Property Configuration

Assignment Hierarchy Configure Property

  • Property Name - This is normally the name of the dimension that is being added to the Assignment Hierarchy.
  • Property Display - This should be set to "Tag".
  • Property Type - This property informs the allocation step which table (the Values To Allocate Table or the Driver Data Table) this dimension relates to.
    • Source - Is used in conjunction with the Values To Allocate Table.
    • Target - Is used in conjunction with the Driver Data Table.
    • Driver - Is used to filter the Driver Data Table for the specific driver selected.
    • Context - When the Values To Allocate Table and the Driver Data Table contain the same dimension, context can be used to specify how the dimensions should relate to one another. Context is often used when both the Values To Allocate Table and the Driver Data Table contain Profit / Cost Centers or Geography.
      • Current - Acts as a passthrough and will filter the Driver Data Table based on the settings of the target dimension. For example, if the Context is based on the Profit Center dimension and the Profit Center target dimension is set to ALL, then the driver data would filter on all Profit Centers.
      • Parent - When selected, the parent of the Profit Center in the Values To Allocate Table will be used to filter the driver values in the Driver Data Table. This is useful when driver values are, at times, not available for a specific Profit Center but often are at the parent level.
      • All - When selected, the Profit Center in the Values To Allocate Table will not filter the driver values in the Driver Data Table; driver values for all Profit Centers will be used.
  • Editor Type - This drop down should be set to Select Dimension.

Once the appropriate properties have been selected for the dimension being added to the Assignment Hierarchy, select "Edit Configuration".

Dimension Configuration

Assignment Hierarchy Configure

  • Dimension - Use the drop down to select the dimension.
  • Hierarchy - If the dimension selected has alternate hierarchies, then they will appear and be selectable here as well as the main hierarchy.
  • Start Node - If you don't wish the dimension to be displayed from the top node you can select any node within the hierarchy as the node from which the dimension will be displayed.
  • Allow Multiple Selections - If checked the user will be able to select multiple nodes in the hierarchy.
  • Special Cases - When selected the special cases will be available for selection in the dimension drop down menu. They are typically used in Target dimensions.
    • Source - When a dimension is set to Source, the allocation will ignore this dimension when it filters the Driver Data Table, but the allocated results will include values from the dimension.
    • Current - Can be used when a dimension is shared between Source and Target. When the Target dimension is set to Current, the Driver Data Table will be filtered by the current value of the Source dimension as the allocation runs. An example would be if there are multiple periods in the Values To Allocate Table and the Driver Data Table but you want the allocation to allocate within the periods and not across them. It is also common to use Current on Business Units, Cost Centers, and Geographies.
    • Unassigned - When a dimension is set to Unassigned, the allocation will ignore this dimension when it filters the Driver Data Table, and the allocation result for this dimension will be a Null value.
    • All - When a dimension is set to ALL, the allocation will use all the values in the dimension.

The Values To Allocate Table, Driver Data Table and Allocation Result Table can be selected dynamically or statically.

Dynamic Table Selection

The dynamic table option allows specification of a table using text and variables. This is useful when employing variable driven workflows where the table or view references are relative to the variables specified.

An example that uses the current_month variable to dynamically point to a table:

legal_entity/inputs/{current_month}/ledger_values
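
For instance, if current_month were set to 2024_01 at run time, the reference above would resolve to:

legal_entity/inputs/2024_01/ledger_values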

Static Table Selection

When a specific table is desired as the source, leave the Dynamic box unchecked and select the source table using the dropdown menu.

Table Explorer is always available with any table selection. Click on the Table Explorer button to the right of the table selection and a Table Explorer window will open.

Values To Allocate Table

This is the table that contains the values that are to be allocated. These are typically cost or revenue values.

Driver Data Table

The driver data table contains the values that the allocation step will use to allocate costs.

Examples:

  • For a supply chain to assign costs to customers, you might use delivery data with the number of deliveries or the weight of the deliveries as the driver.
  • For an IT help desk to assign its costs to the departments it supports, the driver data might be the number of tickets by cost center.
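
The underlying arithmetic is proportional splitting: each target receives the allocated value multiplied by its share of the total driver value. A minimal sketch with hypothetical numbers (the allocation step performs this at scale inside the database):

    # Split $100 of cost across two departments by ticket counts of 3 and 7.
    value_to_allocate = 100.0
    drivers = {"Dept A": 3.0, "Dept B": 7.0}
    total = sum(drivers.values())
    allocated = {dept: value_to_allocate * d / total for dept, d in drivers.items()}
    print(allocated)  # {'Dept A': 30.0, 'Dept B': 70.0}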

Driver Data Sign Rule

Driver data can contain both positive and negative values. The Driver Data Sign Rule lets you decide how conflicting signs will be handled.

  • Error on conflicting signs - The allocation step will produce an error and stop if conflicting signs are encountered.
  • Proceed with warning on conflicting signs - The allocation step will use both negative and positive driver values but will display a warning.
  • Use only positive driver values - The allocation step will use only positive driver values and ignore negative values.
  • Use only negative driver values - The allocation step will use only negative driver values and ignore positive values.
  • Use absolute values of driver data - The allocation step will use the absolute values of the driver data.
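
To make the last three rules concrete, the sketch below shows how a hypothetical mixed-sign driver set would be filtered (illustration only; the step applies these rules inside the database):

    drivers = [40.0, -10.0, 60.0]
    positive_only = [d for d in drivers if d > 0]    # [40.0, 60.0]
    negative_only = [d for d in drivers if d < 0]    # [-10.0]
    absolute_vals = [abs(d) for d in drivers]        # [40.0, 10.0, 60.0]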

Intermediate Tables

The Intermediate Tables are created each time an allocation step runs and provide a summary of the allocation processing. The Intermediate Tables provide insight into how the allocation process is running and are used to troubleshoot unexpected results.

  • Paths - Shows the number of unique allocation paths summarized from the assignment hierarchy.
  • Mapping - Shows how each line of the Values To Allocate Table is mapped to the allocation targets.
  • Summary - Shows each rule, as a result of the assignment hierarchy, and how many of the records from the Values To Allocate Table match it.

Allocation Result Table

Append Results to Target Table

If this box is checked the allocation results will be appended to the allocation result table. If this box is not checked the allocation results table will be overwritten each time the allocation step runs.

Separate Columns for Allocated Results

If this box is checked then the results table will show the amount of each allocated record as well as the amount actually allocated to each driver record.

Rename Dimension Nodes

If this box is checked, the allocation step will rename the dimension node in the Assignment Dimension when it runs.

Advanced Options

Thread Count

Sets the number of concurrent operations the allocation step will use.

Chunk Size

Sets the number of allocation paths processed within each thread.

Allocation Source Map

Allocation Source Map

The Allocation Source Map is used to map the columns from the Values To Allocate Table that will be used in the allocation step.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Role

Each column in the data mapper must be assigned a role:

  • Pass Through - These columns will appear in the allocation results table.
  • Value to Allocate - This is the column that contains the values to be allocated.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct records, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group-By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage section.

Allocation Source Filters

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy which is the same syntax as other expressions.
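
For example, a source filter that keeps only positive amounts in a single period could be written as follows (hypothetical column names; and_ is SQLAlchemy's conjunction helper):

    and_(table.period == '2024_01', table.amount > 0)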

View examples and expression functions in the Expressions area.

Driver Data Map

Allocation Driver Data Map

The Allocation Driver Data Map is used to map the columns from the Driver Data Table that will be used in the allocation step.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Role

Each column in the data mapper must be assigned a role:

  • Source Relation - These columns have corresponding columns in the Values To Allocate Table.
  • Allocation Target - These columns will be the target of the allocation step and will appear in the Allocation Result Table.
  • Split Value - This column contains the values that will be used to allocate the values in the Values To Allocate Table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct records, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group-By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage section.

Driver Data Filters

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy which is the same syntax as other expressions.

View examples and expression functions in the Expressions area.

Examples

Example 1

Values To Allocate Table

Allocation By Dimension

Driver Data Table

Allocation By Dimension

Assignment Dimension Hierarchy

Allocation By Dimension

Since the Target RC dimension is set to Current, the driver data will be filtered by the Source RC values in the Values To Allocate Table. Since the only value in the Source RC is "A", only the driver value records where RC = A will be used in the allocation step.

Allocation Results Table

Allocation By Dimension

Example 2

Values To Allocate Table

Allocation By Dimension

Driver Data Table

Allocation By Dimension

Assignment Dimension Hierarchy

Allocation By Dimension

Since the Target RC dimension is set to ALL, the driver data will include all RC values, as you can see in the RC column of the Allocation Results Table.

Allocation Results Table

Allocation By Dimension

Example 3

Values To Allocate Table

Allocation By Dimension

Driver Data Table

Allocation By Dimension

Assignment Dimension Hierarchy

Allocation By Dimension

With the Context RC set to ALL and the Target RC set to Source, the driver data will include all RCs in the driver data. The Context RC will override the setting on the Target RC.

Allocation Results Table

Allocation By Dimension

Example 4

Values To Allocate Table

Allocation By Dimension

Driver Data Table

Allocation By Dimension

Assignment Dimension Hierarchy

Allocation By Dimension

With the Context RC set to ALL, the driver data will include all RCs in the driver data.

Allocation Results Table

Allocation By Dimension

1.1.4.13 - Allocation Split

Allocate values based on driver data

Description

Allocate values based on driver data.

Allocation Split

Data Table Settings

The Values To Allocate Table, Driver Data Table and Allocation Result Table can be selected dynamically or statically.

Dynamic Table Selection

The dynamic table option allows specification of a table using text and variables. This is useful when employing variable driven workflows where the table or view references are relative to the variables specified.

An example that uses the current_month variable to dynamically point to a table:

legal_entity/inputs/{current_month}/ledger_values

Static Table Selection

When a specific table is desired as the source, leave the Dynamic box unchecked and select the source table using the dropdown menu.

Table Explorer is always available with any table selection. Click on the Table Explorer button to the right of the table selection and a Table Explorer window will open.

Values To Allocate Table

This is the table that contains the values that are to be allocated. These are typically cost or revenue values.

Driver Data Table

The driver data table contains the values that the allocation step will use to allocate costs.

Examples:

  • For a supply chain to assign costs to customers, you might use delivery data with the number of deliveries or the weight of the deliveries as the driver.
  • For an IT help desk to assign its costs to the departments it supports, the driver data might be the number of tickets by cost center.

Driver Data Sign Rule

Driver data can contain both positive and negative values. The Driver Data Sign Rule lets you decide how conflicting signs will be handled.

  • Error on conflicting signs - The allocation step will produce an error and stop if conflicting signs are encountered.
  • Proceed with warning on conflicting signs - The allocation step will use both negative and positive driver values but will display a warning.
  • Use only positive driver values - The allocation step will use only positive driver values and ignore negative values.
  • Use only negative driver values - The allocation step will use only negative driver values and ignore positive values.
  • Use absolute values of driver data - The allocation step will use the absolute values of the driver data.

Allocation Result Table

Append Results to Target Table

If this box is checked the allocation results will be appended to the allocation result table. If this box is not checked the allocation results table will be overwritten each time the allocation step runs.

Separate Columns for Allocated Results

If this box is checked then the results table will show the amount of each allocated record as well as the amount actually allocated to each driver record.

Allocation Source Map

Allocation Source Map

The Allocation Source Map is used to map the columns from the Values To Allocate Table that will be used in the allocation step.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Role

Each column in the data mapper must be assigned a role:

  • Pass Through - These columns will appear in the allocation results table.
  • Value to Allocate - This is the column that contains the values to be allocated.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct records, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group-By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage section.

Allocation Source Filters

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy which is the same syntax as other expressions.

View examples and expression functions in the Expressions area.

Driver Data Map

Allocation Driver Data Map

The Allocation Driver Data Map is used to map the columns from the Driver Data Table that will be used in the allocation step.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Role

Each column in the data mapper must be assigned a role:

  • Source Relation - These columns have corresponding columns in the Values To Allocate Table.
  • Allocation Target - These columns will be the target of the allocation step and will appear in the Allocation Result Table.
  • Split Value - This column contains the values that will be used to allocate the values in the Values To Allocate Table.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct records, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group-By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage section.

Driver Data Filters

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy which is the same syntax as other expressions.

View examples and expression functions in the Expressions area.

1.1.4.14 - Rule-Based Tagging

Tag data based on rules

Description

Rule Based Tagging is used to add attributes contained within a dimension to a data table.

Rule Based Tagging

Data Table Settings

The Source Table and Tagging Result Table can be selected dynamically or statically.

Dynamic Table Selection

The dynamic table option allows specification of a table using text and variables. This is useful when employing variable driven workflows where the table or view references are relative to the variables specified.

An example that uses the current_month variable to dynamically point to a table:

legal_entity/inputs/{current_month}/ledger_values

Static Table Selection

When a specific table is desired as the source, leave the Dynamic box unchecked and select the source table using the dropdown menu.

Table Explorer is always available with any table selection. Click on the Table Explorer button to the right of the table selection and a Table Explorer window will open.

Source Table

This is the table containing the data to which you wish to add the attributes from the Assignment Dimension.

Rule Based Tagging

Tagging Result Table

The Tagging Result Table will contain the data from the Source Data Table with the attributes contained in the Assignment Dimension Hierarchy.

Rule Based Tagging

Assignment Dimension Hierarchy

Rule Based Tagging

The Assignment Dimension Hierarchy gives the user the ability to point, click and filter the Source Table to add attributes to the Tagging Result Table. The Assignment Dimension Hierarchy is created by combining dimensions that reference the Source Table.

Creating An Assignment Dimension Hierarchy

To create the Assignment Dimension Hierarchy you must first create the dimensions you wish to use as filters for the Source Table. The links below will guide you through creating these dimensions.

Creating Dimensions

Loading Dimensions

Creating The Main Hierarchy

Once the dimensions for the Source Table have been created the next step is to decide which of the dimensions for the Source Table will serve as the Main Hierarchy for the Assignment Dimension Hierarchy.

Copy this dimension by navigating to the Dimensions tab in PlaidCloud, clicking on the dimension, and then selecting Actions and Copy Dimension. When you copy the dimension, a pop-up will appear asking you to enter a name for the copied dimension.

Adding Dimensions To The Assignment Hierarchy

Open the newly created Assignment Dimension, click on the down arrow next to Properties, and select New Property.

Assignment Hierarchy Property

This will open the Property Configuration dialog box:

Property Configuration

Assignment Hierarchy Configure Property

  • Property Name - This is normally the name of the dimension that is being added to the Assignment Hierarchy.
  • Property Display - This should be set to "Tag".
  • Property Type - For Rule Based Tagging, the property type should be set to Source.
    • Source - Is used in conjunction with the Source Table.
  • Editor Type - This drop down should be set to Select Dimension.

Once the appropriate properties have been selected for the dimension being added to the Assignment Hierarchy, select "Edit Configuration".

Dimension Configuration

Assignment Hierarchy Configure

  • Dimension - Use the drop down to select the dimension.
  • Hierarchy - If the dimension selected has alternate hierarchies, then they will appear and be selectable here as well as the main hierarchy.
  • Start Node - If you don't wish the dimension to be displayed from the top node you can select any node within the hierarchy as the node from which the dimension will be displayed.
  • Allow Multiple Selections - If checked the user will be able to select multiple nodes in the hierarchy.
  • Special Cases - Are not used in Rule Based Tagging.

Source Map

Allocation Source Map

The Source Map is used to map the columns from the Source Table that will be used in the tagging step.

Inspection and Populating the Mapper

Using the Inspect Source menu button provides additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow when source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Role

Each column in the data mapper must be assigned a role:

  • Pass Through - These columns will appear in the Tagging Result Table.
  • Value to Allocate - This is the column that contains the values to be allocated.

Deleting Columns

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

Changing Column Order

To rearrange columns in the target data table, select the desired column(s). You can use either:

  • Bulk Move Arrows: Select the desired move option from the arrows in the upper right
  • Context Menu: Right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

Reduce Result to Distinct Records Only

To return only distinct records, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.

Depending on the situation, you may want to consider use of Summarization instead.

The distinct process retains the first unique record found and discards the rest. You may want to apply a sort on the data if it is important for consistency between runs.

Aggregation and Grouping

To aggregate results, select the Summarize menu option. This will toggle a set of select boxes for each column in the target data table. Choose an appropriate summarization method for each column.

  • Group By
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Count (including nulls)
  • Mean
  • Standard Deviation
  • Sample Standard Deviation
  • Population Standard Deviation
  • Variance
  • Sample Variance
  • Population Variance
  • Advanced Non-Group-By

For advanced data mapper usage such as expressions, cleaning, and constants, please see the Advanced Data Mapper Usage section.

Source Filters

Table Data Filters

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset Of Data

This filter type provides a way to filter the inbound source data based on the specified conditions.

Apply Secondary Filter To Result Data

This filter type provides a way to apply a filter to the post-transformed result data based on the specified conditions. The ability to apply a filter on the post-transformed result allows for exclusions based on results of complex calculations, summarizations, or window functions.

Final Data Table Slicing (Limit)

The row slicing capability provides the ability to limit the rows in the result set based on a range and starting point.

Filter Syntax

The filter syntax utilizes Python SQLAlchemy which is the same syntax as other expressions.

View examples and expression functions in the Expressions area.

1.1.4.15 - SAP ECC and S/4HANA Steps

1.1.4.15.1 - Call SAP Financial Document Attachment

Calls an SAP ECC Remote Function Call (RFC) designed to attach a file to specified FI document number

Description

Calls an SAP ECC Remote Function Call (RFC) designed to attach a file to specified FI document number.

Examples

RFC Parameters

Select Agent to Use. Select Target Directory from the drop down bar, and browse below for the correct child folder destination for the file. Next, appropriately name the “Target File Name”. Under “Function Call Information”, enter the Function, the Return Value Parameter, and select the parameters.

You can choose to Insert Row or Append Row under the Parameters section, as well as name the parameters and give them values. Choose the Max Concurrent Requests number, and select Wait for RFC to Complete. Save and Run Step.

1.1.4.15.2 - Call SAP General Ledger Posting

Calls an SAP ECC Remote Function Call (RFC) designed to post a journal entry including applicable VAT and Withholding taxes

Description

Calls an SAP ECC Remote Function Call (RFC) designed to post a journal entry including applicable VAT and Withholding taxes. This may also run in test mode, which will perform the posting process but not complete the posting. This allows for the collection of detectable errors such as an account being closed or a customer not existing in the specified company code. The error checking is robust, with the ability to return multiple detected errors in a single test.

Examples

RFC Parameters

Select Agent to Use. Select Target Directory from the drop down bar, and browse below for the correct child folder destination for the file. Next, appropriately name the “Target File Name”. Under “Function Call Information”, enter the Function, the Return Value Parameter, and select the parameters.

You can choose to Insert Row or Append Row under the Parameters section, as well as name the parameters and give them values. Choose the Max Concurrent Requests number, and select Wait for RFC to Complete. Save and Run Step.

1.1.4.15.3 - Call SAP Master Data Table RFC

Calls an SAP ECC Remote Function Call (RFC) designed to access master data tables and retrieves the data in tabular form

Description

Calls an SAP ECC Remote Function Call (RFC) designed to access master data tables and retrieves the data in tabular form. This data is then available for transformation processes in PlaidCloud. It also provides the ability to export the master data table structure to a separate file which includes column names, data types, and column order information.

Examples

RFC Parameters

Select Agent to Use. Select Target Directory from the drop down bar, and browse below for the correct child folder destination for the file. Next, appropriately name the “Target File Name”. Under “Function Call Information”, enter the Function, the Return Value Parameter, and select the parameters.

You can choose to Insert Row or Append Row under the Parameters section, as well as name the parameters and give them values. Choose the Max Concurrent Requests number, and select Wait for RFC to Complete. Save and Run Step.

1.1.4.15.4 - Call SAP RFC

Calls an SAP ECC Remote Function Call (RFC) and retrieves the data in tabular form

Description

Calls an SAP ECC Remote Function Call (RFC) and retrieves the data in tabular form. This data is then available for transformation processes in PlaidCloud.
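
For orientation only, a direct RFC call made outside PlaidCloud with the open-source pyrfc package looks roughly like the sketch below. All connection details are placeholders; within PlaidCloud, the connection is managed through a PlaidLink agent and the step configuration described here.

    from pyrfc import Connection

    # Placeholder connection details for an SAP ECC system.
    conn = Connection(ashost="sap.example.com", sysnr="00",
                      client="100", user="RFC_USER", passwd="secret")

    # Call the generic RFC_READ_TABLE function module to fetch a few rows.
    result = conn.call("RFC_READ_TABLE", QUERY_TABLE="T001",
                       DELIMITER="|", ROWCOUNT=10)
    for row in result["DATA"]:
        print(row["WA"])  # each row comes back as a delimited string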

Examples

RFC Parameters

Select Agent to Use. Select Target Directory from the drop down bar, and browse below for the correct child folder destination for the file. Next, appropriately name the “Target File Name”. Under “Function Call Information”, enter the Function, the Return Value Parameter, and select the parameters.

You can choose to Insert Row or Append Row under the Parameters section, as well as name the parameters and give them values. Choose the Max Concurrent Requests number, and select Wait for RFC to Complete. Save and Run Step.

Advanced Value Iteration

You can select “No Iterators” at the top of this tab and then select Save and Run Step if desired, or you can specify iterators as described below.

Here, you can select “Specify Argument Values to Iterate Over” and create arguments that will supply the iteration values.

Next to Select Iterator Argument to Edit Values, there are options to Insert Row, Append Row, Delete Row, Move Down Row, or Move to Bottom Row. Below, you can choose Range Iterators using the same drop down menu. The last section is titled “Exclusions for Selected Range Iteration” with the same per-row options to add, delete, and reorder rows. The excluded values can be entered below. Save and Run Step.

1.1.4.16 - SAP PCM Steps

1.1.4.16.1 - Create SAP PCM Model

This feature allows you to create a blank SAP PCM Model

Description

Creates a blank SAP Profitability and Cost Management (PCM) model.

Our Credentials

Tartan Solutions is an official SAP Partner and a preferred vendor of services related to SAP PCM model design and implementation.

Examples

Select Agent to Use from the dropdown. Enter “Model Name” and select “Model type” from the dropdown (both of which are in the “Model Information” section). Check the “Wait for Copy to Complete” check box, then click “Save and Run Step”.

1.1.4.16.2 - Delete SAP PCM Model

Deletes SAP Profitability and Cost Management (PCM) models matching the search criteria

Description

Deletes SAP Profitability and Cost Management (PCM) models matching the search criteria. Deleting models using this transform allows deletion of many models without having to monitor the process.

Our Credentials

Tartan Solutions is an official SAP Partner and a preferred vendor of services related to SAP PCM model design and implementation.

Examples

Select “Agent to Use” from the dropdown. Select your desired “Model Search Method”. For this example, we’ve selected “Exact Match”. Enter “Model Search Text” (what you are looking for) under “Model Name Information” and decide if the search is case sensitive or not (if so, check the check box). Finally, check the “Wait for Deletion to Complete” and click “Save and Run Step”.

1.1.4.16.3 - Calculate PCM Model

Start your PCM Model Calculation Process

Description

Starts SAP Profitability and Cost Management (PCM) model calculation process.

Our Credentials

Tartan Solutions is an official SAP Partner and a preferred vendor of services related to SAP PCM model design and implementation.

Examples

Select Agent to Use from the dropdown, enter model name in the “Model Name” field, click the “Wait for Calculation to Complete” check box (if desired), then click “Save and Run Step”.

1.1.4.16.4 - Copy SAP PCM Model

Copy an SAP PCM model

Description

Copies an SAP Profitability and Cost Management (PCM) model.

Our Credentials

Tartan Solutions is an official SAP Partner and a preferred vendor of services related to SAP PCM model design and implementation.

Example

Select Agent to Use from the dropdown, enter “From Model Name” and “To Model Name” in the “Model Information” field, click the “Wait for Copy to Complete” check box, then click “Save and Run Step”.

1.1.4.16.5 - Copy SAP PCM Period

Copy period within an SAP PCM model

Description

Copies an SAP Profitability and Cost Management (PCM) model period within the same model.

Our Credentials

Tartan Solutions is an official SAP Partner and a preferred vendor of services related to SAP PCM model design and implementation.

Examples

Select Agent to Use from the dropdown, enter “Model Name”, “From Period Name” and “To Period Name” in the “Model Information” field. Click the “Wait for Copy to Complete” check box, then click “Save and Run Step”.

1.1.4.16.6 - Copy SAP PCM Version

Copy a version within an SAP PCM model

Description

Copies an SAP Profitability and Cost Management (PCM) model version within the same model.

Our Credentials

Tartan Solutions is an official SAP Partner and a preferred vendor of services related to SAP PCM model design and implementation.

Examples

Select Agent to Use from the dropdown, enter “Model Name”, “Origin Period Name”, and “Destination Period Name” in the “Model Information” field. Click the “Wait for Copy to Complete” check box, then click “Save and Run Step”.

1.1.4.16.7 - Rename SAP PCM Model

Renames your SAP Profitability and Cost Management model

Description

Renames an SAP Profitability and Cost Management (PCM) model.

Our Credentials

Tartan Solutions is an official SAP Partner and a preferred vendor of services related to SAP PCM model design and implementation.

Examples

Select Agent to Use from the dropdown, enter “From Model Name” and “To Model Name” in the “Model Information” field, click the “Wait for Copy to Complete” check box, then click “Save and Run Step”.

1.1.4.16.8 - Run SAP PCM Console Job

Launch an SAP PCM Console process on the PCM server

Description

Launches an SAP Profitability and Cost Management (PCM) Console process on the PCM server.

Our Credentials

Tartan Solutions is an official SAP Partner and a preferred vendor of services related to SAP PCM model design and implementation.

Examples

Select Agent to Use from the dropdown, enter console file path in the “Console File Path” field, click the “Wait for Console Job to Complete” check box (if desired), then click “Save and Run Step”.

1.1.4.16.9 - Run SAP PCM Hyper Loader

Load your PCM model using direct table loads

Description

Loads an SAP Profitability and Cost Management (PCM) model using direct table loads. This process is significantly faster than Databridge. The Hyper Loader supports virtually all of the current PCM data, assignment, and structure tables.

This is the current list of available loading targets:

  • Activity Aliases
  • Activity Dimensional Hierarchy
  • Activity Driver Aliases
  • Activity Driver Dimensional Hierarchy
  • Activity Driver Value
  • BOM Default Makeup
  • BOM External Unit Rate
  • BOM Makeup
  • BOM Production Volume
  • BOM Units Sold
  • Cost Object 1 Aliases
  • Cost Object 1 Dimensional Hierarchy
  • Cost Object 2 Aliases
  • Cost Object 2 Dimensional Hierarchy
  • Cost Object 3 Aliases
  • Cost Object 3 Dimensional Hierarchy
  • Cost Object 4 Aliases
  • Cost Object 4 Dimensional Hierarchy
  • Cost Object 5 Aliases
  • Cost Object 5 Dimensional Hierarchy
  • Cost Object Assignment
  • Cost Object Driver
  • Line Item Aliases
  • Line Item Detail Aliases
  • Line Item Detail Dimensional Hierarchy
  • Line Item Detail Value
  • Line Item Dimensional Hierarchy
  • Line Item Direct Activity Assignment
  • Line Item Resource Driver Assignment
  • Line Item Value
  • Period Aliases
  • Period Dimensional Hierarchy
  • Resource Driver Aliases
  • Resource Driver Dimensional Hierarchy
  • Resource Driver Split
  • Resource Driver Value
  • Responsibility Center Aliases
  • Responsibility Center Dimensional Hierarchy
  • Revenue
  • Revenue Aliases
  • Revenue Dimensional Hierarchy
  • Service Aliases
  • Service Dimensional Hierarchy
  • Spread Aliases
  • Spread Dimensional Hierarchy
  • Spread Value
  • Version Aliases
  • Version Dimensional Hierarchy
  • Worksheet 1 Aliases
  • Worksheet 1 Dimensional Hierarchy
  • Worksheet 2 Aliases
  • Worksheet 2 Dimensional Hierarchy
  • Worksheet Value

Our Credentials

Tartan Solutions is an official SAP Partner and a preferred vendor of services related to SAP PCM model design and implementation.

Examples

Select Agent to Use from the dropdown. Enter the model name and select the load package storage path location, then select the desired child folder within it. Use the Table Data Selection below to select the source table model and the target load table. Using Inspect Source and propagating both sides of the table will reveal the data. Click “Save and Run Step” once the data is entered and you have added any expressions.

1.1.4.16.10 - Stop PCM Model Calculation

This function stops a PCM model calculation process

Description

Stops an SAP Profitability and Cost Management (PCM) model calculation process.

Our Credentials

Tartan Solutions is an official SAP Partner and a preferred vendor of services related to SAP PCM model design and implementation.

Examples

Select Agent to Use from the dropdown, enter “Model Name”, click the “Wait for Copy to Complete” check box, then click “Save and Run Step”.

1.1.5 - Scheduled Workflows

There are two ways to schedule actions. The first is within the workflow itself, by ordering, enabling, and applying conditionals to workflow steps. The second is within the Event Scheduler, which you can reach through Analyze->Tools menu->Event Scheduler. The Event Scheduler allows for ordering and applying conditionals to one or more workflows.

1.1.5.1 - Event Scheduler

Create and organize a scheduled recurring event

Description

Scheduling specific workflows can be a useful organizational tool, so PlaidCloud provides the ability to do just that. Using the Event Scheduler, you can schedule a workflow to run by month, day, hour, minute, or even on a financial workday schedule. If using the financial workday schedule approach, PlaidCloud also allows configuration of holiday schedules using various holiday calendars.

The Events Table will indicate whether the event is scheduled by month, day, hour and minute, or workday under the event description column.

To view events:

  1. Open Analyze
  2. Select “Tools”
  3. Click “Event Scheduler”

This will open the Events Table showing all the current events configured for the workspace.

Creating an Event

To create an event:

  1. Open Analyze
  2. Select “Tools”
  3. Click “Event Scheduler”
  4. Click “Add Scheduled Event”
  5. Complete the required fields
  6. Click “Create”

Limit Running: this section allows you to schedule an event to run for a specific time period and a specific number of times.

Otherwise, you can set the workflow to run using the classic schedule approach.

To use the classic schedule approach:

  1. Click the “Event Schedule” tab of the Event table
  2. Under the “Schedule type” select “Use Classic Schedule”
  3. Select the specific months, hours, minutes, and days you want the workflow to run

To set the workflow to run using the workday schedule approach:

  1. Click the “Event Schedule” tab of the Event table
  2. Under the “Schedule type” select “Use Workday Schedule”
  3. Choose the workday you would like the workflow to run on
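
As a rough sketch of what a workday-based schedule computes (illustrative only, assuming Monday-to-Friday workdays; PlaidCloud's scheduler handles this internally and also applies the configured holiday calendars, which this sketch omits):

    import datetime

    def nth_workday(year: int, month: int, n: int) -> datetime.date:
        """Return the nth Monday-to-Friday workday of a month (holidays ignored)."""
        day = datetime.date(year, month, 1)
        count = 0
        while True:
            if day.weekday() < 5:  # Monday=0 ... Friday=4
                count += 1
                if count == n:
                    return day
            day += datetime.timedelta(days=1)

    print(nth_workday(2024, 1, 3))  # third workday of January 2024 -> 2024-01-03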

Editing an Event

To edit an event:

  1. Open Analyze
  2. Select “Tools”
  3. Click “Event Scheduler”
  4. Click the edit icon
  5. Adjust desired fields
  6. Click “Update”

Deleting an Event

To delete an event:

  1. Open Analyze
  2. Select “Tools”
  3. Click “Event Scheduler”
  4. Click the delete icon
  5. Click delete again

Pausing an Event

To temporarily pause an event:

  1. Open Analyze
  2. Select “Tools”
  3. Click “Event Scheduler”
  4. Click the edit icon
  5. Uncheck the “Active” checkbox
  6. Click “Update”

Saving the event after unchecking the active box means the event will no longer run on the specified schedule until it’s reactivated.

Running Events on Demand

To run an event immediately:

  1. Open Analyze
  2. Select “Tools”
  3. Click “Event Scheduler”
  4. Select the desired event or events
  5. Click “Run Selected Events”

1.1.6 - External Data Source and Service Connectors

Data Source Connectors are the means through which data connections are made to external systems to import or export data in or out of PlaidCloud.

1.1.6.1 - Data Connections

Use this table reference for more information on external system connections and databases

Description

PlaidCloud connects to external systems by using various data connections directly or through PlaidLink agents.

For more details on each data connection type, please navigate to the specific data connection documentation.

Relational Databases

Greenplum

  • Connection Type - Database
  • Reference - greenplum

Microsoft SQL Server

  • Connection Type - Database
  • Reference - sqlserver

MySQL

  • Connection Type - Database
  • Reference - mysql

ODBC

  • Connection Type - Database
  • Reference - odbc

Oracle

  • Connection Type - Database
  • Reference - oracle

Postgres

  • Connection Type - Database
  • Reference - postgres

Amazon Redshift

  • Connection Type - Database
  • Reference - redshift

SAP HANA

  • Connection Type - Database
  • Reference - hana

Exasol

  • Connection Type - Database
  • Reference - exasol

IBM DB2

  • Connection Type - Database
  • Reference - db2

Informix

  • Connection Type - Database
  • Reference - informix

Hadoop Based Databases

Hive

  • Connection Type - Database
  • Reference - hive

Presto

  • Connection Type - Database
  • Reference - presto

Spark

  • Connection Type - Database
  • Reference - spark

Team Collaboration Tools

Microsoft Teams

  • Connection Type - Notification
  • Reference - teams

Slack

Parameter | Value
Connection Type | Notification
Reference | slack

Cloud Services

OAuth Connection

Parameter | Value
Connection Type | oAuth
Reference | oauth

Quandl

Parameter | Value
Connection Type | Quandl
Reference | quandl

Google Big Query

Parameter | Value
Connection Type | Google Big Query
Reference | gbq

Google Spreadsheet

Parameter | Value
Connection Type | Google Spreadsheet
Reference | gspread

Oracle EBS

Oracle EBS utilizes the standard Oracle database connection specified above. This connection provides the connectivity to query, load, and execute PL/SQL programs in Oracle.

If the EBS instance has the REST API interface available, it can be accessed using the same approach as Oracle Cloud described below.

Oracle Cloud

Oracle Cloud utilizes standard RESTful requests to perform queries, data loading, and other operations. A REST connection using OAuth2 tokens is used for these interactions. This uses the standard oAuth connection specified above.

Salesforce

Salesforce utilizes standard RESTful requests to perform all operations. A REST connection using OAuth2 tokens is used for these interactions. This uses the Salesforce-specific connection type.

Workday

Workday utilizes standard RESTful requests to perform all operations. A REST connection using OAuth2 tokens is used for these interactions. This uses the standard oAuth connection specified above.

JD Edwards

Parameter | Value
Connection Type | JD Edwards Legacy
Reference | jde_legacy

JD Edwards utilizes the standard Oracle database connection specified above. This connection provides the connectivity to query, load, and execute PL/SQL programs in Oracle.

Infor

Parameter | Value
Connection Type | Infor
Reference | infor

SAP Analytics Cloud

Parameter | Value
Connection Type | SAP Analytics Cloud
Reference | sap_sac

SAP ECC

Parameter | Value
Connection Type | SAP ECC
Reference | sap_ecc

SAP Profitability and Cost Management (PCM)

Parameter | Value
Connection Type | SAP PCM
Reference | sap_pcm

SAP Profitability and Performance Management (PaPM)

Parameter | Value
Connection Type | SAP PaPM
Reference | sap_papm

1.1.7 - Allocation Assignments

Allocations enable values (typically costs) to be split to a more-granular level by applying a driver. Allocations are used for a multitude of purposes, including but not limited to Activity-Based Costing, IT & Shared Service Chargeback, and the calculation of a fully loaded cost to produce and provide a good or service to customers.

1.1.7.1 - Getting Started

1.1.7.1.1 - Allocations Quick Start

Set up a basic allocation quickly

Content coming soon...

1.1.7.1.2 - Why are Allocations Useful

A practical understanding of allocations and how they are helpful

Content coming soon...

1.1.7.2 - Configure Allocations

1.1.7.2.1 - Configure an Allocation

Set up a cost allocation transform and manage assignments

Purpose

Allocations enable values (typically costs) to be shredded to a more-granular level by applying a driver. Allocations are used for a multitude of purposes, including but not limited to Activity-Based Costing, IT & Shared Service Chargeback, and calculation of the fully loaded cost to produce and provide a good or service to customers. They are a fundamental tool for financial analysis, and a cornerstone for managerial reporting operations such as Customer & Product Profitability. They are also a useful construct for establishing and managing global Intercompany Transfer Prices for goods and services.

Setting up the Allocation transform

From a practical standpoint, allocations are set up in PlaidCloud in a similar fashion to other data transforms such as joins and lookups. Four configuration parameters must be set for an Allocation transform to succeed.

  1. Specify Preallocated Data: Specify the preallocated data table in the Values To Allocate Table section of the allocation transform.
  2. Specify Driver Data: Driver data serves as the basis for the ratios used in the allocation (see the sketch after this list). Choose the driver data table in the Driver Data Table section of the allocation transform.
  3. Specify the Results Table: Post-allocated data must be stored in a table. Specify the table in the Allocation Result Table section of the transform.
  4. Specify the Assignment Dimension: Allocations require an assignment dimension, whose purpose is to prescribe how each record or set of records in the preallocated data will be assigned. Specify the assignment dimension in the Assignment Dimension Hierarchy section of the allocation transform.
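
Conceptually, the transform splits each source value in proportion to its driver share: allocated value = source value x (driver quantity / total driver quantity for that source record). The following is a minimal SQL sketch of that math only, using hypothetical tables preallocated_costs(dept, value) and drivers(dept, product, driver_qty) that are not part of PlaidCloud itself:

-- Hypothetical tables and columns, for illustration only.
-- Each department's value is split across products by driver share.
SELECT
    p.dept,
    d.product,
    p.value * d.driver_qty
        / NULLIF(SUM(d.driver_qty) OVER (PARTITION BY d.dept), 0) AS allocated_value
FROM preallocated_costs p
JOIN drivers d ON d.dept = p.dept;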

Key Concepts

The sum of values in an allocated dataset should tie out to those of the pre-allocated source data
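
A quick way to verify this tie-out is a reconciliation query comparing totals before and after the allocation runs. A minimal sketch, reusing the hypothetical preallocated_costs table and an allocation_results table in the spirit of the sketch above:

-- Hypothetical table names; substitute the tables configured in your transform.
SELECT
    (SELECT SUM(value) FROM preallocated_costs) AS source_total,
    (SELECT SUM(allocated_value) FROM allocation_results) AS allocated_total,
    (SELECT SUM(allocated_value) FROM allocation_results)
        - (SELECT SUM(value) FROM preallocated_costs) AS variance; -- expect ~0; otherwise suspect stranded or over-allocated cost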

Allocations are accessible in PlaidCloud as a transform option. To set up an allocation, first, set up assignments, and then configure an allocation transform to use the assignments to allocate inbound records using a specified driver table.

Assignments are special dimensions. They are accessed within the Dimensions section of a PlaidCloud Project.

To set up an assignment dimension, perform the following steps:

  1. From the project screen, navigate to the Dimensions tab
  2. Create a new dimension

1.1.7.2.2 - Recursive Allocations

How to set up and manage recursive allocations

Content coming soon...

1.1.7.3 - Results and Troubleshooting

1.1.7.3.1 - Allocation Results

Understand and analyze allocation results

Content coming soon...

1.1.7.3.2 - Troubleshooting Allocations

Understand how to troubleshoot allocations when the results are not as expected

Stranded Cost

Stranded cost is....

Over Allocation of Cost

Over allocation of cost is when you end up with more output cost...

Incorrect Allocation of Cost

Incorrect allocation of costs happens when...

1.1.8 - Data Warehouse Service

The PlaidCloud Data Warehouse Service (DWS) is the platform that PlaidCloud stores its data on. The DWS is based on Greenplum, a warehouse suitable for big data analytics and traditional data warehouse operations. Its extensive analytical optimizations, array of indexing types, highly flexible compression, and availability of both row-based and columnar storage models make it ideal for a wide array of uses.

1.1.8.1 - Getting Started

Getting started with the PlaidCloud Data Warehouse Service

About

The PlaidCloud Data Warehouse Service (DWS) stands on the shoulders of great technology. The service is based on Greenplum, a warehouse suitable for big data analytics and traditional data warehouse operations. Its extensive analytical optimizations, array of indexing types, highly flexible compression, and availability of both row-based and columnar storage models make it ideal for a wide array of uses.

The PlaidCloud DWS continues our goal of providing the best open source options for our customers to eliminate lock-in while also providing services as turn-key solutions.

Managing, upgrading, and maintaining a data warehouse requires special skills and investment. Both can be hard to find when you need them. The PlaidCloud service eliminates that need while still providing deep technical access for those that need or want total control. Since Greenplum is based on PostgreSQL, it is nearly 100% compatible with current PostgreSQL operations.

Key Benefits

Always on

The PlaidCloud DWS provides always-on query access. You don't have to schedule availability or incur additional costs for usage outside the expected time.

This also means there is no first-query delay and no cache to warm up before optimal performance is achieved.

Read and Write the way you expect

The PlaidCloud DWS operates like a traditional database so you don't have to decide which instances are read-only or have special processes to load data from a write instance. All instances support full read and write with no special ETL or data loading processes required.

If you are used to using traditional databases, you don't need to learn any new skills or change your applications. The DWS is a drop-in replacement for Greenplum as well as a replacement for PostgreSQL, CockroachDB, yugabyteDB and other databases that use the PostgreSQL Wire Protocol. If you are coming from other databases such as Oracle, MySQL or Microsoft SQL Server then some adjustments to your query logic may be necessary but not to the overall process.

Since SAP HANA and Amazon Redshift use the PostgreSQL dialect, those seeking a portable alternative will find PlaidCloud DWS a straightforward option.
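
In practice, this means a standard libpq-style connection string is all that is needed; a sketch with a hypothetical host, database, and credentials:

postgresql://analyst:secret@dws.example.com:5432/analytics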

Economical

With usage-based billing, you only pay for what you use. There are no per-query or extra processing charges. High performance storage with triple redundancy, incredible IOPS, wide data throughput, and out-of-band backups are all standard at a reasonable price.

We eliminate the headache of having to choose different data warehousing tiers based on optimizing storage costs. We offer three different storage options at a table level which all interoperate and can be used together in queries:

  • HOT - This is the highest performance storage available and is suitable for analytical data that is frequently accessed or needs to be ultra-responsive
  • WARM - This provides cost savings over Hot storage while maintaining good performance and no changes to SQL commands
  • COLD - This is the most economical by utilizing cloud storage

Highly performant

While network attached storage has achieved significant performance, it still can't come close to local disk. Using local disks for storage is complicated in cloud environments, but our goal was to provide an uncompromising data warehouse service that achieves the same or better performance as a hand-built data warehouse cluster.

We also extensively tested optimal compute, networking, and RAM configurations to achieve maximum performance. As new technology and capabilities become available, our goal is to incorporate features that increase performance.

Real-time backups without impacting performance

One of the more complex processes with data warehouse clusters is backups. While seemingly simple, achieving a consistent snapshot of data across many nodes while not interfering in the execution of multiple queries is actually quite complex. Doing this without impacting performance of the database is even harder.

Thankfully, you don't have to worry about all that complexity. You can set the frequency of backups you desire and it is all handled automatically for you. While all data is triple redundant, backups are necessary in the event a destructive user action takes place such as accidentally deleting data or dropping a table. Having a backup allows for recovery of that prior state.

Scale out and scale up capable

The ability to both scale up and scale out are essential for a data warehouse, especially when it is performing analytical processes.

Scaling up means more simultaneous queries can occur at once. This is useful if you have many users or applications that require many concurrent processes.

Scaling out means more compute power can be applied to each query by breaking the data processing up across many CPUs. This is useful on large data where summarizations or other analytical processes such as machine learning (MADLib) or geospatial (PostGIS) analysis is required.

The PlaidCloud DWS allows scale expansion either on-demand or based on pre-defined events/metrics.

Integrated with PlaidCloud Analyze for Low/No Code operations

Analyze and Dashboards are quickly connected to any PlaidCloud DWS. This provides point-and-click operations to automate data related activities as well as building beautiful visualizations for reporting and insightful analysis.

From an Analyze project, you can select any DWS instance. This also provides the ability for Analyze projects to switch among DWS instances to facilitate testing and Blue/Green upgrade processes. It also allows quickly restoring an Analyze Project from a DWS point-in-time backup.

Clone

Making a clone of an existing warehouse performs a complete copy of the source warehouse. A clone shares nothing with the original warehouse, making it a quick way to isolate a complete warehouse for testing, or even to create a live archive at a specific point in time.

Another important feature is that you can clone a warehouse to a different data center. This might be desirable if global usage shifts from one region to another, or if keeping copies of a warehouse in various regions improves internal development and testing processes.

Restore

A new warehouse instance is easily restored from an existing backup. The backup frequency is adjustable for each warehouse instance. Those backups allow for a point-in-time restoration.

Prioritize queries within the warehouse

The PlaidCloud DWS provides a straightforward way to control the priority of queries within a single DWS instance. Through use of Resource Queues, certain roles can be granted higher priority. This differs from other warehouse services that require separate warehouse instances to delineate different priority access based on resource isolation/dedication.

By using Resource Queues, you can achieve your business requirements (e.g. high priority dashboards for executives) while using a single DWS instance. This allows you to control resource usage and eliminates the need to have large amounts of idle resources dedicated to low usage (high importance) scenarios.
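
Resource Queues are standard Greenplum objects, so they can be managed with ordinary SQL. A minimal sketch with hypothetical queue and role names:

-- Hypothetical names; grants executive dashboard queries higher priority.
CREATE RESOURCE QUEUE executive_queue WITH (ACTIVE_STATEMENTS=10, PRIORITY=HIGH);
ALTER ROLE executive_dashboards RESOURCE QUEUE executive_queue;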

Large number of connectors available

Since PlaidCloud DWS is based on PostgreSQL technology, virtually all PostgreSQL connectors and clients will work out-of-the-box. With a vibrant PostgreSQL community, new capabilities, adapters, and connectors are released frequently.

Foreign table access

Already have data in another database or in cloud storage? Not a problem: you can connect to it directly and include the data in complex queries such as joins and Common Table Expressions. Use of foreign tables also includes predicate push-down, so conditions are applied before the data is moved to the DWS instance.

This enables use of existing data sources which means you can choose to gradually migrate them to a DWS instance or choose to keep the data where it exists forever.

Note that performance will not be as good as having the data in the DWS instance since it is subject to network speeds and the speed of the foreign data source operations.

This capability also enables communication across different PlaidCloud DWS instances. While it would be ideal to have all data in a single warehouse instance, there are certainly situations where that is not practical.
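
As an illustration of the pattern, the standard PostgreSQL postgres_fdw wrapper looks like the following (the exact wrapper available in a given DWS instance may differ, and all server, credential, and table names here are hypothetical):

CREATE EXTENSION IF NOT EXISTS postgres_fdw;

-- Hypothetical remote server and credentials.
CREATE SERVER legacy_db FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'legacy.example.com', port '5432', dbname 'erp');
CREATE USER MAPPING FOR CURRENT_USER SERVER legacy_db
    OPTIONS (user 'readonly', password 'secret');

-- Hypothetical foreign table; the WHERE condition below is pushed down to the source.
CREATE FOREIGN TABLE legacy_orders (
    order_id BIGINT,
    amount   NUMERIC,
    order_dt DATE
) SERVER legacy_db OPTIONS (schema_name 'public', table_name 'orders');

SELECT order_dt, SUM(amount)
FROM legacy_orders
WHERE order_dt >= DATE '2024-01-01' -- applied at the source, not after transfer
GROUP BY order_dt;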

Well understood and mature

While much of data warehousing activity is fairly straightforward, there still remains a large body of work that pushes the bounds of a database. When operating at maximum capacity, many facets come into play including the maturity and optimization of all the underlying processes. Since PlaidCloud DWS is built on very mature technology in use for decades, substantial performance and stability optimizations are in place.

With a well understood and mature technical foundation, there is far less likelihood of strange failure modes, and when unusual events do occur, an answer is likely a Google search away.

Tuning is sometimes necessary for highly complex queries. There are substantial resources available that help explain, analyze, and optimize queries in PostgreSQL and Greenplum systems. We all wish hand-tuning queries were no longer necessary, but the questions we ask of our data, and the processing required to answer them, can often see orders-of-magnitude time improvements from adjusting aspects of a query where even the most intelligent query planner will struggle.

When trying to squeeze out the best performance you want to rely on known patterns and examples.

Web or Desktop SQL Client Access

A web SQL console is provided within PlaidCloud. It is a full featured SQL client so it supports most use cases. However, for more advanced use cases, a desktop client or other service may be desired. The PlaidCloud DWS uses standard security and access controls enabling remote connections and controlled user permissions.

Access options allow quick and easy start-up as well as ongoing query and analytics access. A firewall allows control over external access.

DBeaver provides a nice free desktop option that has a Greenplum driver to fully support PlaidCloud DWS instances. They also provide a commercial version called DBeaver Pro for those that require/prefer use of licensed software.

1.1.8.2 - Pricing

PlaidCloud Data Warehouse Service Pricing

Usage Based

The cost of a PlaidCloud Data Warehouse instance is determined by a limited number of factors that you control. All costs incurred are usage based.

The factors that impact cost are:

  • Concurrency Factor - The size of each compute node in your warehouse instance
  • Parallelism Factor - The number of nodes in your warehouse instance
  • Allocated Storage - The number of Gigabytes of storage consumed by your warehouse instance
  • Network Egress - The number of Gigabytes of network egress. Excludes traffic to PlaidCloud applications within the same region. Ingress is always free.
  • Backup Retention Period - How many days, weeks, or months to retain backups beyond 30 days

Storage, backups, and network egress are calculated in gigabytes (GB), where 1 GB is 2^30 bytes. This unit of measurement is also known as a gibibyte (GiB).

All prices are in USD. If you are paying in another currency please convert to your currency using the appropriate rate.

Billing is on an hourly basis. The monthly prices shown are illustrative based on a 730 hour month.

Controlling Factors

Concurrency Factor

Compute Type | Hourly Cost (streams/hr) | Monthly Cost (streams/month)
Standard | Contact Us | Contact Us

Concurrency determines how many simultaneous queries are handled by the DWS instance. This is expressed as a number of process streams. There is not a 1:1 relationship between streams and query capacity since a single stream can handle multiple simultaneous queries. However, as the number of concurrent requests increases, query duration may exceed the desired response time, and an increase in the concurrency factor will help.

From a conceptual standpoint you can view processing streams as vCPUs used to process queries.

The default concurrency factor is 2, which is a good starting point if you are unsure of your needs. It can be adjusted from 1 to 14. If your needs exceed 14, please contact us to increase your concurrency limit.

Parallelism Factor

There is no additional cost per node. The compute cost of the DWS instance is the product of concurrency and parallelism, plus the master node. For example, a concurrency factor of 2 on a 4-node instance is billed as 2 x 4 = 8 process streams plus the master node.

Parallelism determines how many nodes are in the DWS instance. This is expressed as node count. The number of nodes determines how much compute power can be applied to any single query. By increasing the node count, the computational part of the query can be spread out over many process streams. In addition, the storage throughput is multiplied by the number of nodes, which is very valuable when dealing with large datasets.

For example, if the maximum theoretical write throughput of a single node was 4 TB/sec, a warehouse with 8 nodes would have a theoretical write throughput of 8 x 4 TB/sec = 32 TB/sec. There are many factors that impact write speed including compression level, indexes, table storage type, network overhead, etc... but in general, nodes apply a multiplying factor to data throughput speed.

Allocated Storage

Three types of table storage options are available in a PlaidCloud DWS:

  • Hot
  • Warm
  • Cold

Storage Type | Hourly Cost (GB/hr) | Monthly Cost (GB/month)
Hot | Contact Us | Contact Us
Warm | Contact Us | Contact Us
Cold | Contact Us | Contact Us

These storage options can be applied on a table-by-table basis so you can optimize storage costs within a DWS with no change to existing queries.

Hot Storage

This is the most common storage type for a database. It is the default storage type for data in the DWS instance.

Storage cost is computed based on the allocated Hot storage space for the warehouse instance. Storage is allocated to the warehouse on-demand up to the specified limit set by you. The current limit is 4.5TB per node. If your needs exceed 4.5TB per node, please contact us to increase your node storage limit.

Warm Storage

Warm Storage provides an excellent trade-off between cost and performance. Warm storage is ideal for data used in batch processing, infrequently accessed historical data, or other general data that does not have high performance requirements. Warm storage provides good performance and does not have per node size limits.

Cold Storage

Cold storage is significantly less expensive than both Hot and Warm but it does have limitations. It is not included in the backup snapshots. It has significantly lower performance and is generally not suitable for queries that must be responsive.

However, for low usage or archival data it can provide a substantial cost savings while still enabling real-time access to the data, albeit at a slower query speed. This is a significant improvement over using ETL processes to archive table data and then needing to reconstitute it later when required through additional ETL processes.

For example, if the current and prior year financial data is stored in high performance storage to handle the vast majority of queries, prior years could be stored in Cold storage. When a query needs more years than are held in hot storage, a simple UNION query of the hot data and the cold data returns the full dataset. This eliminates complex data archival processes by keeping all the data readily available in the same DWS instance while optimizing storage costs.
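
A minimal sketch of that UNION pattern, with hypothetical tables financials_hot (current and prior year) and financials_cold (older years); UNION ALL avoids an unnecessary de-duplication pass since the tables hold disjoint years:

-- Hypothetical table names; both tables live in the same DWS instance.
SELECT fiscal_year, account, SUM(amount) AS amount
FROM (
    SELECT fiscal_year, account, amount FROM financials_hot
    UNION ALL
    SELECT fiscal_year, account, amount FROM financials_cold
) AS all_years
GROUP BY fiscal_year, account;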

Network Egress

Source Geolocation | Egress (per GB) | Ingress (per GB)
Worldwide Locations (Default) | $0.13 | Free
China Locations (excluding Hong Kong) | $0.26 | Free
Australia Locations | $0.20 | Free

Network egress is calculated based on the egress traffic from your PlaidCloud Workspace. Egress traffic from a DWS instance to PlaidCloud applications in the same region, such as Analyze and Dashboard, is excluded. However, if you are connecting directly to the DWS instance through the external access point, egress charges will apply. In addition, if you access DWS instances from different regions using PlaidCloud applications then egress charges will apply.

If you connect between DWS instances in the same region using internal network routing there are no egress charges. However, if you connect using the external endpoint then egress charges will apply.

There is no charge for ingress traffic.

Backup Retention Period

Retention Period | Hourly Cost (GB/hr) | Monthly Cost (GB/month)
Scheduled Backups - First 30 Days | Free | Free
Scheduled Backups - Retention (after 30 days) | $0.000274 | $0.02
On-Demand Backup Snapshots | $0.000274 | $0.02

By default, all scheduled backups are stored for 30 days free of charge. Setting the retention period beyond 30 days will incur additional storage retention charges. Backup retention storage cost is based on the allocated storage size of the DWS instance when the backup was taken and the duration for which you would like to retain each backup beyond 30 days.

For example, if the DWS instance allocated storage is 200GB and the additional retention period is 7 days, the backup storage cost is computed as 200GB x 7 days = 1,400 GB-days.

1,400 GB-days x 24 hours/day x $0.000274 per GB-hour ≈ $9.21

On-demand backups can be taken at any time and will incur backup storage fees immediately. There is a minimum of 30 days billing applied to on-demand backups even if they are deleted within the 30 days.

By default, on-demand backups do not have a retention period set. If you make on-demand backups without a retention period, you must manually delete the backup or backup storage fees will continue to accrue.

If you put a hold on a backup to prevent deletion when the retention period expires, you must remove that hold or manually delete the backup. If the hold remains you will continue to incur backup storage fees.

Premium Capabilities Included

PlaidCloud DWS provides several additional features as part of each DWS instance that provide valuable capabilities without additional fees. Each DWS instance includes MADLib, PostGIS, and PXF.

The MADLib and PostGIS libraries allow you to perform machine learning and geospatial analysis without moving your data or using other external tools. PXF provides the ability to query external data files, whose metadata is not managed by the database. PXF includes built-in connectors for accessing data that exists inside HDFS files, Hive tables, HBase tables, JDBC-accessible databases and more. Users can also create their own connectors to other data storage or processing engines.
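
As one illustration, PostGIS functions can be called directly in SQL. ST_Distance, ST_MakePoint, and ST_SetSRID are standard PostGIS functions; the coordinates below are arbitrary example values:

-- Distance in meters between two longitude/latitude points, using PostGIS.
SELECT ST_Distance(
    ST_SetSRID(ST_MakePoint(-87.6298, 41.8781), 4326)::geography,
    ST_SetSRID(ST_MakePoint(-73.9857, 40.7484), 4326)::geography
) AS distance_meters;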

1.2 - Dashboards

Dashboards are customizable, dynamic workspaces where data and results can be visually displayed using multiple different types of charts and graphs. To access the Dashboards, click on the chart icon/Dashboards in the left menu.

1.2.1 - Learning About Dashboards

Understanding Dashboard features and how to troubleshoot errors and warnings

Description

Dashboards support a wide range of use cases from static reporting to dynamic analysis. Dashboards support complex reporting needs while also providing an intuitive point-and-click interface. There may be times when you run into trouble. A member of the PlaidCloud Support Team is always available to assist you, but we have also compiled some tips below in case you run into a similar problem.

Common Questions and Answers for Dashboard

Preferred Browser

Due to frequent caching, Google Chrome is usually the best web browser to use with Dashboard. If you are using another browser and encounter a problem, we suggest first clearing the cache and cookies to see if that resolves the issue. If not, then we suggest switching to Google Chrome and seeing if the problem recurs.

Sync Delay

  • Problem: After unpublishing and publishing tables in the Dashboards area, the data does not appear to be syncing properly.
  • Solutions: Refresh the dashboard. Currently, old table data is cached, so it is necessary to refresh the dashboard when rebuilding tables.

Table Sync Error

  • Problem: After recreating a table using the same published name as a previous table, the table is not syncing, even after hitting refresh on the dashboard, publishing, unpublishing, and republishing the table.
  • Solutions: Republish the table with a different name. The Dashboard data model does not allow for duplicate tables, or tables with the same published name and project ID.

Cache Warning

  • Problem: A warning popped up on the upper right saying “Loaded data cached 3 hours ago. Click to force-refresh.”
  • Solutions: Click on the warning to force-refresh the cache. You can also click the drop-down menu beside “Edit dashboard” and select “Force refresh dashboard” there. Either of these options will refresh within the system and is preferred to refreshing the web browser itself.

Permission Warning

  • Problem: My published dashboard is populating with the same error in each section where data should be populated: “This endpoint requires the datasource… permission”

  • Solutions: Check that the datasources are not old. Most likely, the charts are pulling from outdated material. If this happens, update the charts with new datasources.

  • Problem: I am getting the same permission warning from above, but my colleague can view the chart data.

  • Solutions: If the problem is that one individual can see the data in the charts and another cannot, the second person may need to be granted permission by someone within the permitted category. To do so:

    1. Go to Charts
    2. Select the second small icon of a pencil and paper next to the chart you want to grant access to
    3. Click Edit Table
    4. Click Detail
    5. Click Owners and add the name of the person you want to grant access to and save.

Saving Modified Filters to Dashboard

  • Problem: I modified filters in my draft model and want to save them to my dashboard. The filters are not in the list. In my draft model, a warning stated, “There is no chart definition associated with this component, could it have been deleted? Delete this container and save to remove this message.”
  • Solutions: Go to “Edit Chart.” From there, make sure the “Dashboards” section has the correct dashboard filled in. If it is blank, add the correct dashboard name.

Formatting Numbers: Breaks

  • Problem: My number formatting is broken and out of order.
  • Solutions: The most likely reason for this break is the use of nulls in a numeric column. Using a filter, eliminate the null values in numeric columns and try running it again. If that does not work, review the material provided here: http://bl.ocks.org/zanarmstrong/05c1e95bf7aa16c4768e or here: https://github.com/apache-superset/superset-ui/issues. Finally, always feel free to reach out to a PlaidCloud Support team member. This problem is known, and a more permanent solution is being developed.

Formatting Numbers

To round numbers to nearest integer:

  1. Do not use: ,.0f
  2. Instead use: ,d or $,d for dollars

Importing Existing Dashboard

  • Problem: I’m importing an existing dashboard and getting an error on my export.
  • Solutions: First, check whether the dashboard has a “Slug.” To do this, open Edit Dashboard, and the second section is titled Slug. If that section is empty or says “null,” then this is not the problem. Otherwise, if there is any other value in that field, you need to ensure that export JSON has a unique slug value. Change the slug to something unique.

1.2.2 - Using Dashboards

Create and edit data tables within dashboard and explore the data

Description

Usually, members will have access to multiple workspaces and projects. Having this data in multiple spots, however, may not always be desirable. This is why PlaidCloud provides the ability to view all of the accessible data in a single location through the use of dashboards and highly intuitive data exploration. PlaidCloud Dashboards (where the dashboards and data exploration are integrated) provides a rich palette of visualization and data exploration tools that can operate on virtually any size dataset. This setup also makes it possible to create dashboards and other visualizations that combine information across projects and workspaces, including ad-hoc analysis.

Editing a Table

The message you receive after creating a new table also directs you to edit the table configuration. While there are more advanced features for editing the configuration, we will start with a smaller, simpler subset. To edit the table configuration:

  1. Click on the edit icon of the desired table
  2. Click the “List Columns” tab
  3. Arrange the columns as desired
  4. Click “Save”

This allows you to define the way you want to use specific columns of your table when exploring your data.

  • Groupable: If you want users to group metrics by a specific field
  • Filterable: If you need to filter on a specific field
  • Count Distinct: If you want to get the distinct count of this field
  • Sum: If this is a metric you want to sum
  • Min: If this is a metric you want to gather basic summary statistics for
  • Max: If this is a metric you want to gather basic summary statistics for
  • Is temporal: This should be checked for any date or time fields

Exploring Your Data

To start exploring your data, simply click on the desired table. By default, you’ll be presented with a Table View.

Getting a Data Count

To get the count of all your records in the table:

  1. Change the filter to “Since”
  2. Enter the desired since filter
    • You can use simple phrases such as “3 years ago”
  3. Enter the desired until filter
    • The upper time limit defaults to “now”
  4. Select the “Group By” header
  5. Type “Count” into the metrics section
  6. Select “COUNT(*)”
  7. Click the “Query” button

You should then see your results in the table.

If you want to find the count of a specific field or restriction:

  1. Type in the desired restriction(s) in the “Group By” field
  2. Run the query

Restricting Result Number

If you only need a certain number of results, such as the top 10:

  1. Select “Options”
  2. Type in the desired max result count in the “Row Limit” section
  3. Click “Query”

Additional Visualization Tools

To expand abbreviated values to their full length:

  1. Select “Edit Table Config”
  2. Click “List Sql Metric”
  3. Click “Edit Metric”
  4. Click “D3Format”

To edit the unit of measurement:

  1. Select “Edit Table Config”
  2. Click “List Sql Metric”
  3. Click “Edit Metric”
  4. Click “SQL Expression”

To change the chart type:

  1. Scroll to “Chart Options”
  2. Fill in the required fields
  3. Click “Query”

From here you are able to set axis labels, margins, ticks, etc.

1.2.3 - Formatting Numbers and Other Data Types

How to format numbers and other data types to look how you want

Formatting numbers and other data types

There are two ways of formatting numbers in PlaidCloud. One way is to transform the values in the tables directly; a second (more common) way is to format them on display, so the values don't lose precision in the table and the user sees them in a cleaner, more appropriate way.
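
As a minimal sketch of the first approach, a value can be rounded with PostgreSQL-flavored SQL before it is ever displayed (the table and column names here are hypothetical):

-- Hypothetical table/column; rounding in the data trades stored precision for presentation.
SELECT round("unit_price"::numeric, 2) AS unit_price FROM sales;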

When I display a value on a dashboard, how do I format it the way I want? The core way to display a value is through a chart object on a dashboard. Charts can be Tables, Big Numbers, Bar Charts, and so on. Each chart object may have a slightly different place or means to display the values. For example, in Tables, you can change the format for each column, and for a Big Number, you can change the format of the number.

To change the format, edit the chart and locate the D3 FORMAT or NUMBER FORMAT field. For a Big Number chart, click on the CUSTOMIZE tab, and you will see NUMBER FORMAT. For a Table, click on the CUSTOMIZE tab, select a number column (displayed with a #) in CUSTOMIZE COLUMN and you will see the D3 FORMAT field.

The default value is Adaptive formatting, which adjusts the format based on the values. But if you want to fix the format (e.g. $12.23 or 12,345,678), select the format you want from the dropdown or manually type a different value (if the field allows).

D3 Formatting - what is it?

D3 Formatting is a structured, formalized means to display data results in a particular format. For example, in certain situations you may wish to display a large value as 3B (3 billion), formatted as .3s in D3 format, or as 3,001,238,383, formatted as ,d. Another common example is the decision to represent dollar values with 2 decimal precision, or to round that to the nearest dollar $,d or $,.2f to show dollar sign, commas, 2 decimal precision, and a fixed point notation. For a deeper dive into D3, see the following site: GitHub D3

General D3 Format

The general structure of D3 is the following:

[[fill]align][sign][symbol][0][width][,][.precision][~][type]

The fill can be any character (like a period, x, or anything else). If you have a fill character, an align character must follow it, which must be one of the following:

  • > - Right-aligned within the available space. (Default behavior.)
  • < - Left-aligned within the available space.
  • ^ - Centered within the available space.
  • = - Like >, but with any sign and symbol to the left of any padding.

The sign can be:

  • - - Blank for zero or positive and a minus sign for negative. (Default behavior.)
  • + - A plus sign for zero or positive and a minus sign for negative.
  • ( - Nothing for zero or positive and parentheses for negative.
  • (space) - A space for zero or positive and a minus sign for negative.

The symbol can be: $ - apply currency symbol.

The zero (0) option enables zero-padding; this implicitly sets fill to 0 and align to =.

The width defines the minimum field width; if not specified, then the width will be determined by the content. For example, if you have 8, the width of the field will be 8 characters.

The comma (,) option enables the use of commas as separators (e.g. for thousands).

Depending on the type, the precision can either indicate the number of digits that follow the decimal point (types f and %), or the number of significant digits (types (none), g, r, s, and p). If the precision is not specified, it defaults to 6 for all types except (none), which defaults to 12.

The tilde ~ option trims insignificant trailing zeros across all format types. This is most commonly used in conjunction with types r, s and %.

Types

Type | Description
f | fixed point notation. (common)
d | decimal notation, rounded to integer. (common)
% | multiply by 100, and then decimal notation with a percent sign. (common)
g | either decimal or exponent notation, rounded to significant digits.
r | decimal notation, rounded to significant digits.
s | decimal notation with an SI prefix, rounded to significant digits.
p | multiply by 100, round to significant digits, and then decimal notation with a percent sign.

Examples

Expression | Input | Output | Notes
,d | 12345.67 | 12,346 | rounds the value to the nearest integer, adds commas
,.2f | 12345.678 | 12,345.68 | adds commas, rounds to 2 decimal places
$,.2f | 12345.67 | $12,345.67 | adds a $ symbol, has commas, 2 digits after the decimal
$,d | 12345.67 | $12,346 | adds a $ symbol, commas, rounds to the nearest integer
.<10, | 151925 | 151,925... | left-aligned in a 10-character field, padded with periods, with commas
0>10 | 12345 | 0000012345 | pads the value with zeroes to the left, 10 characters wide; works well for fixing the width of a code value
,.2% | 13.215 | 1,321.50% | commas, 2 digits to the right of the decimal, converts to a percentage and shows a % symbol
x^+$16,.2f | 123456 | xx+$123,456.00xx | padded with "x", centered, +/- sign, $ symbol, 16 characters wide, commas, 2-decimal fixed point

1.2.4 - Example Calculated Columns

Examples of calculated column expressions

Description

Data in dashboards can be augmented with calculated columns. Each dataset will contain a section for calculated columns. Calculated columns can be written and modified with PostgreSQL-flavored SQL.

In order to view and edit metrics and calculated expressions, perform the following steps:

  1. Sign into plaidcloud.com and navigate to dashboards
  2. From within visualize.plaidcloud.com, navigate to Data > Datasets
  3. Search for a dataset to view or modify
  4. Modify the dataset by hovering over the edit button beneath Actions

Examples

count

COUNT(*)

min

min("MyColumnName")

max

max("MyColumnName")

coalesce (useful for converting nulls to 0.0, for instance)

coalesce("BaselineCost",0.0)

divide

divide, with a hack for avoiding DIV/0 errors

sum("so_infull")/(count(*)+0.00001)

conditional statement

CASE WHEN "Field_A"= 'Foo' THEN max(coalesce("Value_A",0.0)) - max(coalesce("Value_B",0.0)) END

1.2.5 - Example Metrics

Examples of common metrics

Description

Data in dashboards can be augmented with metrics. Each dataset will contain a section for Metrics. Metrics can be written and modified with PostgreSQL-flavored SQL.

In order to view and edit metrics and calculated expressions, perform the following steps:

  1. Sign into plaidcloud.com and navigate to dashboards
  2. From within visualize.plaidcloud.com, navigate to Data > Datasets
  3. Search for a dataset to view or modify
  4. Modify the dataset by hovering over the edit button beneath Actions

Examples

Metrics are typically aggregate expressions built by combining logic and existing columns.

convert a date to text

to_char("week_ending_sol_del_req", 'YYYY-mm-dd')

various SUM examples

SUM("Value")

SUM(-1*"value_usd_mkp") / (0.0001+SUM(-1*"value_usd_base"))

(SUM("Value_USD_VAT")/SUM("Value_USD_HEADER"))*100

sum(CASE WHEN "Material_Type" = 'Gloves' THEN "delivery_cases" ELSE 0 END)

sum("total_cost") / sum("delivery_count")

various case examples

CASE
  WHEN SUM("distance_dc_xd") = 0 THEN 0
  ELSE SUM("XD") / SUM("distance_dc_xd")
END

sum(CASE
WHEN "FUNCTION" = 'OM' THEN "VALUE__FC"
ELSE 0.0
END)

count

count(*)

First and Cast

public.first(cast("PRETAX_SEQ" AS NUMERIC))

Round

round(SUM("GROSS PROFIT"), 0)

Concat

concat("GCOA","CC Code")

1.3 - Document Management

Document management allows for the creation and management of account access and document stores for importing data into and exporting data out of PlaidCloud via csv and other file formats. To view the document Management tools, click on the file folder icon/Document in the left menu.

1.3.1 - Adding New Document Accounts

Document Accounts allow you to grant access to manage documents in PlaidCloud for the purposes of data import, export or other actions.

1.3.1.1 - Add AWS S3 Account

How to add an AWS Simple Storage Service (S3) account to Document

AWS S3 Setup

These steps need to be completed within the AWS console

  1. Sign into or create an Amazon Web Services (AWS) account
  2. Go to All services > Storage > S3 in the console
  3. Create a default or test bucket
  4. Go to All Services > Security Identity & Compliance > IAM > Users in the console
  5. Select the Create User button
  6. When prompted, enter a username and select Access Key - Programmatic access only. Select the Next: Permissions button.
  7. Select the option box called Attach existing policies directly
  8. In the filter search box type s3. When the list filters down to S3 related items select AmazonS3FullAccess by checking the box to the left. Select the Next: Tags button.
  9. Skip this step by selecting the Next: Review button
  10. Review the User settings and select Create user
  11. Capture the keys generated for the user by downloading the CSV or copy/pasting the keys somewhere for use later. You will not be able to retrieve this key again so keep track of it. If you need to regenerate a key simply go back to step 5 above.

You should now have everything you need to add your S3 account to PlaidCloud Document.

PlaidCloud Document Setup

  1. Sign into PlaidCloud
  2. Select the workspace where the new Document account will reside
  3. Go to Document > Manage Accounts
  4. Select the + New Account button
  5. Select Amazon S3 as the Service Type
  6. Fill in a name and description
  7. Leave the Start Path blank or add a start path based on an existing S3 bucket hierarchy. See the use of Start Paths for more information.
  8. Select an appropriate Security Model for your use case. Leave it Private if unsure.
  9. Paste the Access Key created in step 11 above into the Public Key/User text field under Auth Credentials
  10. Paste the Secret Key created in step 11 above into the Private Key/Password text field under Auth Credentials
  11. Select the Save button and your new Document account is live

1.3.1.2 - Add Google Cloud Storage Account

How to add a Google Cloud Storage (GCS) account to Document

Google Cloud Setup

These steps need to be completed within Google Cloud Platform

  1. Sign into or create a Google Cloud Platform account
  2. Select or create a project where the Google Cloud Storage account will reside
  3. Go to Cloud Storage > Browser in the Google Cloud Platform console
  4. Create a default or test bucket
  5. Go To IAM & Admin > Service Accounts in the Google Cloud Platform console
  6. Select the + Create Service Account button
  7. Complete the service account information and create the account
  8. Find the service account just created in the list of service accounts and select Manage Keys from the context menu on the right
  9. Under the Add Key menu, select Create a Key
  10. When prompted, select JSON format for the key. This will generate the key and automatically download it to your desktop. You will not be able to retrieve this key again so keep track of it. If you need to regenerate a key simply go back to step 8 above.
  11. Go to IAM & Admin > IAM in the Google Cloud Platform console
  12. Find the service account you just created and click on the edit permissions icon
  13. Add Storage Admin and Storage Transfer Admin rights for the service account and save. Note that less permissive rights can be assigned, but this will impact the functionality available through Document.

You should now have everything you need to add your GCS account to PlaidCloud Document.

PlaidCloud Document Setup

  1. Sign into PlaidCloud
  2. Select the workspace where the new Document account will reside
  3. Go to Document > Manage Accounts
  4. Select the + New Account button
  5. Select Google Cloud Storage as the Service Type
  6. Fill in a name and description
  7. Leave the Start Path blank or add a start path based on an existing GCS account hierarchy. See the use of Start Paths for more information.
  8. Select an appropriate Security Model for your use case. Leave it Private if unsure.
  9. Open the Service Account JSON key file you downloaded in step 10 above and copy the contents
  10. Paste the contents into the Auth Credentials text area
  11. Select the Save button and your new Document account is live

1.3.1.3 - Add Wasabi Hot Storage Account

How to add a Wasabi Hot Storage (Wasabi) account to Document

Wasabi Hot Storage Setup

These steps need to be completed within the Wasabi Hot Storage console

  1. Sign into or create a Wasabi Hot Storage account
  2. Go to Buckets in the console
  3. Create a default or test bucket
  4. Go to Users in the console
  5. Select the Create User button
  6. When prompted, enter a username and select Programmatic (create API key) user
  7. Skip the group assignment. Select the Next button
  8. Select the plus icon next to the WasabiFullAccess policy to attach the policy to the user. Select the Next button.
  9. Review the User settings and select Create User
  10. Capture the keys generated for the user by downloading the CSV or copy/pasting the keys somewhere for use later. You will not be able to retrieve this key again so keep track of it. If you need to regenerate a key simply go back to step 5 above.

You should now have everything you need to add your Wasabi account to PlaidCloud Document.

PlaidCloud Document Setup

  1. Sign into PlaidCloud
  2. Select the workspace where the new Document account will reside
  3. Go to Document > Manage Accounts
  4. Select the + New Account button
  5. Select Wasabi Hot Storage as the Service Type
  6. Fill in a name and description
  7. Leave the Start Path blank or add a start path based on an existing Wasabi account hierarchy. See the use of Start Paths for more information.
  8. Select an appropriate Security Model for your use case. Leave it Private if unsure.
  9. Paste the Access Key created in step 10 above into Public Key/User text field under Auth Credentials
  10. Paste the Secret Key created in step 10 above into the Private Key/Password text field under Auth Credentials
  11. Select the Save button and your new Document account is live

1.3.2 - Account and Access Management

Manage access to document accounts

1.3.2.1 - Control Document Account Access

Set access controls for Document accounts

Four types of access restrictions are available for an account: Private, Workspace, Member Only, and Security Group. The type of restriction set for a user is editable at any time from the account form.

Updating Account Access

  1. Select Document > Manage Accounts within PlaidCloud
  2. Enter the edit mode on the account you wish to change
  3. Select the desired access level restriction located under Security Model
  4. Select the Save button

Restriction Options

All Workspace Members

This access is the simplest since it provides access to all members of the workspace and does not require any additional assignment of members.

Specific Members Only

This access setting requires assignment of each member to an account. This option is particularly useful when combined with the single sign-on option of assigning members based on a list of groups sent with the authentication. However, for workspaces with large numbers of members, this approach can often require more effort than desired, which is where security groups become useful. To choose specific members only:

  1. Select the members icon from the Manage Accounts list
  2. Drag the desired members from the Unassigned Members column on the left, to the Assigned Members column on the right
  3. To remove members, do the opposite
  4. Select the Save button

Specific Security Groups Only

With this option, permission to access an account is granted to specific security groups rather than just individuals. With access restrictions relying on association with a security group or groups, the administration of accounts with much larger user counts becomes much simpler. To edit assigned groups:

  1. Select the groups icon from the Manage Accounts list
  2. Drag the desired groups from the Unassigned Groups column on the left, to the Assigned Groups column on the right
  3. To remove groups, do the opposite
  4. Select the Save button

Remote agents

PlaidLink agents will often use Document accounts to store files or move files among systems. To allow remote agents access to Document accounts, agents MUST have permission granted. This is a security feature to limit unwanted access to potentially sensitive information. To add agents:

  1. Select the agent icon from the Manage Accounts list
  2. Drag desired agents from the Unassigned Agents column on the left, to the Assigned Agents column on the right
  3. To remove agents, do the opposite
  4. Select the Save button

1.3.2.2 - Document Temporary Storage

Use Document's temporary storage option to share files or move them without worrying about cleanup later

Temporary storage may sound counter-intuitive, but real-world use has shown it to be valuable. Typically, permanent storage is used to move large files between members or among other systems, and file cleanup in these storage locations often happens haphazardly, at best. This causes storage to fill with files that shouldn’t be there, eventually requiring manual cleanup.

Temporary storage is perfect for sharing or transferring these types of large files because the files are automatically deleted after 24 hours.

To view temporary storage options

  1. Go to Document > Temp Share in PlaidCloud

Shared Temporary Storage

Shared temporary storage is viewable by all members of the workspace but is not viewable across workspaces. To access the shared temporary storage area, select the Temp Share menu and click Workspace Temp Share to display a table of files currently in the workspace’s Temp Share area.

To add new files to a shared temporary storage location

  1. Select the Temp Share menu along the top of the main Document page
  2. Click Workspace Temp Share
  3. Click Browse to browse locally stored items
  4. Select the desired file and click Open
  5. Click Upload to upload the file to the temporary storage location

To download existing files from temporary storage

  1. Click on the left-most icon, which represents the file type

To manually delete a file

  1. Click the red delete icon to the left of the file name.

Additional details on file management can be found below under “File Explorer”.

Personal Temporary Storage

Personal temporary storage is only viewable by the member to which the temp share belongs. This storage option is beneficial because it’s accessible across workspaces. This functionality makes it easy to move or use files across workspaces if the member is working in multiple workspaces simultaneously.

All members of the workspace can upload files to a member’s personal share, which acts as a dropbox.

To upload a file to another member’s personal share:

  1. Select the Temp Share menu along the top of the main Document page
  2. Select Drop File to Member Temp. A list of members will be displayed.
  3. Click the left-most icon associated with the member of your choosing
  4. Click Browse to browse locally stored items
  5. Select desired file and then click Open
  6. Click Upload to upload the file to the member’s personal storage

Additional details on file uploading can be found below under “File Explorer”.

1.3.2.3 - Managing Document Account Backups

Control how, where, and when Document account backups occur

Document enables the backup of any account on a nightly basis. This feature permits backup across different cloud storage providers and on local systems. Essentially, any account is a valid target for the backup of another account.

The backup process is not limited to a single backup destination. It is possible to have multiple redundant backup locations specified if this is a desired approach. For example, the backup of an internal server to another server may be one location with a second backup sent to Amazon S3 for off-site storage.

By using the prefix feature, it’s possible to have a single backup account contain the backups from multiple other accounts. Each account backup set begins its top level folder(s) with a different prefix, making it easy to identify the originating location and simplifying the restoration process. For example, if you have three different Document accounts but want to set their backup destination to the same location, using a prefix would allow all three accounts to properly back up without the fear of a name collision.

Reviewing Current Backup Settings

  1. Go to Document > Manage Accounts
  2. Select the backup icon for the account you wish to review

Creating a Backup Set

  1. Go to Document > Manage Accounts
  2. Select the backup icon for the account for which to create a backup
  3. Select the New Backup Set button
  4. Complete the required fields
  5. Select the Create button

The backup process is now scheduled to run nightly (US Time).

Updating a Backup Set

  1. Go to Document > Manage Accounts
  2. Select the backup icon for the account for which to edit a backup
  3. Select the edit icon of the desired backup set
  4. Adjust the desired information
  5. Select the Update button

Deleting a Backup Set

  1. Go to Document > Manage Accounts
  2. Select the backup icon for the account for which to edit a backup
  3. Select the delete icon of the desired backup set
  4. Select the Delete button

1.3.2.4 - Managing Document Account Owners

Add and remove Document account owners

The member who creates the account is assigned as the owner by default. However, Document accounts are designed to support multiple owners. This feature is helpful when a team is responsible for managing account access or when there is member turnover. Adding and removing owners is similar to adding and removing access permissions.

Add or Remove Owners

  1. Go to Document > Manage Accounts in PlaidCloud
  2. Select the owners icon in the Manage Accounts list
  3. Drag new owners from the Unassigned Members column on the left to the Assigned Members column on the right
  4. To remove owners, do the opposite
  5. Select the Save button

Because only owners have the ability to view and edit an account, account administration is set up with two levels:

  • The member needs security access to view and manage accounts in general, and
  • The member must be an owner of the account to view, manage, and change settings of accounts

1.3.2.5 - Using Start Paths in Document Accounts

Control where users start navigation in document storage

The account management form allows the configuration of the storage connection information and a start path. A start path allows those who use the account to begin browsing the directory structure further down the directory tree. This particular option is useful when you have multiple teams that need segregated file storage, but you only want one underlying storage service account.

The Start Path option in Document accounts is useful for the following reasons:

  • When controlling access to sub-directories for specific teams and groups
  • Granting access to only one bucket

For example, setting a start path of teams/team_1/ for the Team 1 Document account and teams/team_2/ for the Team 2 Document account provides different start points on a shared account. When a member opens the Team 1 Document account, they will begin file navigation inside teams/team_1. They will not be able to move up the tree or see anything above teams/team_1.

Team 2 would have a similar restriction of not being able to navigate into Team 1's area.

This provides the ability to restrict specific teams to lower levels of the tree while allowing other teams higher-level access, without needing any additional cloud storage complexity like additional buckets or special permissions.

Adding and Updating the Start Path

  1. Go to Document > Manage Accounts
  2. Select the account you wish to edit and enter the edit mode
  3. Add a Start Path in the Start Path text field
  4. Select the Save button

Start Path Format

The path always begins with the bucket name followed by the sub-directories.

<my-bucket>/folder1/folder2/
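
For example, assuming a bucket named team-docs (a hypothetical name), the Team 1 start path from the scenario above would be:

team-docs/teams/team_1/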

1.3.3 - Using Document Accounts

Upload, download, delete, and view files in Document accounts

Several file operations are available within a Document Account browser. All operations are accessible from a right-click menu within the file browser. The right-click menu provides specific options depending on whether a folder or file is selected.

Opening File Explorer

  1. Go to Document > Shared Accounts
  2. Select the folder icon (far left) for the account you wish to explore

The various file and folder operations available in the file explorer are detailed below:

  • Folders:
    • uploading and creating new folders
    • renaming, moving, and deleting existing folders
    • downloading folder contents as a ZIP file
  • Files:
    • uploading and downloading files
    • renaming, moving, copying, deleting, and refreshing existing files

Upload a File

  1. Go to Document > Shared Accounts
  2. Select the folder icon (far left) for the account you wish to explore
  3. Browse to the desired directory
  4. Right-click and select Upload Here

Download a File

  1. Go to Document > Shared Accounts
  2. Select the folder icon (far left) for the account you wish to explore
  3. Browse to the desired directory
  4. Left-click to select the desired file
  5. Right-click and select Download

Rename a File

  1. Go to Document > Shared Accounts
  2. Select the folder icon (far left) for the account you wish to explore
  3. Browse to the desired directory
  4. Left-click to select the desired file
  5. Right-click and select Rename

Move a File

  1. Go to Document > Shared Accounts
  2. Select the folder icon (far left) for the account you wish to explore
  3. Browse to the desired directory
  4. Left-click to select the desired file
  5. Drag into desired folder
  6. Select Move File

Copy a File

  1. Go to Document > Shared Accounts
  2. Select the folder icon (far left) for the account you wish to explore
  3. Browse to the desired directory
  4. Left-click to select the desired file
  5. Right-click and select Copy

Delete a File

  1. Go to Document > Shared Accounts
  2. Select the folder icon (far left) for the account you wish to explore
  3. Browse to the desired directory
  4. Left-click to select the desired file
  5. Right-click and select Delete

Create a Folder

  1. Go to Document > Shared Accounts
  2. Select the folder icon (far left) for the account you wish to explore
  3. Click “New Top Level Folder”
  4. Enter a folder name of your choosing
  5. Click Create

Rename a Folder

  1. Go to Document > Shared Accounts
  2. Select the folder icon (far left) for the account you wish to explore
  3. Browse to the desired directory
  4. Left-click to select the desired folder
  5. Right-click and select Rename

Move a Folder

  1. Go to Document > Shared Accounts
  2. Select the folder icon (far left) for the account you wish to explore
  3. Browse to the desired directory
  4. Left-click to select the desired folder
  5. Drag into desired folder
  6. Select Move Folder

Delete a Folder

  1. Go to Document > Shared Accounts
  2. Select the folder icon (far left) for the account you wish to explore
  3. Browse to the desired directory
  4. Left-click to select the desired folder
  5. Right-click and select Delete

Download Folder Contents (zip file)

The Download as Zip option is for downloading many files at once. This option will zip (compress) all contents of the selected folder and download the zip file (.zip extension). For easy navigation, the zip file retains the directory structure that exists in the file explorer.

  1. Go to Document > Shared Accounts
  2. Select the folder icon (far left) for the account you wish to explore
  3. Browse to the desired directory
  4. Left-click to select the desired folder
  5. Right-click and select Download as ZIP

1.4 - Expressions

Standard Expressions are basic-level operations that can be used across the platform, such as finding the max value in a column, extracting the year from a date field, or removing the leading zeroes in a text field.

1.4.1 - Expression Library

A reference library of all expressions that can be used in PlaidCloud

Description

An expression is a basic function that does a conversion, calculation, cast to another data type, or other action on data in a column or in a dashboard chart object. Examples are startswith, max, or current_date. PlaidCloud expressions are based on PostgreSQL; for a more in-depth tutorial or reference guide, see the PostgreSQL documentation.
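
For example, the following expressions apply max and startswith to columns (the column names here are placeholders):

func.max(get_column(table, 'Sales Amount'))

get_column(table, 'Product Code').startswith('AB')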

There are three primary areas where expressions are applied: metrics in datasets, calculated columns in datasets, and chart objects in dashboards.

In order to view and edit metrics and calculated expressions:

  1. Sign into plaidcloud.com and navigate to Dashboards. Select the dashboard you want to work in.
  2. Select Data > Datasets from the menu.
  3. Search for a dataset to view or modify
  4. Hover over the dataset with the cursor and you will see icons in the actions column.
  5. Click the edit icon beneath Actions

Viewing a chart object and adding an expression

You can add expressions to chart objects on a dashboard. For example, if you want to add an expression to a table object (a calculated column), you can:

  1. Open the chart object by opening a dashboard, clicking on the three dot icon, and selecting "View chart in Explore".
  2. Now that you are editing the chart, you can add a new Dimension or Metric using either a SIMPLE expression or a CUSTOM SQL expression

Now that you have located where you want to add an expression, you can use the table below as a guide to determine which expression you need.

| Category | Expression | Structure | Example | Description |
| --- | --- | --- | --- | --- |
| Conditional | case | case((expression, truevalue), else_ = falsevalue) | case((table.first_name.isnot(None), func.concat(table.first_name, table.last_name)), else_ = table.last_name) | A switch or conditional control structure that evaluates an expression and performs different actions based on the value of that expression |
| Conditional | coalesce | func.coalesce(column1, column2, ...) | func.coalesce(table.nickname, table.first_name) | Returns the first non-null value in a set of columns. In the example, if there is a nickname it returns that, otherwise it returns the first name. |
| Conversion | cast | func.cast(value, datatype) | func.cast(123, Text) | Converts the value to a specific data type. In the example, it takes an Integer (123) and returns it as a string "123". |
| Conversion | to_char | func.to_char(timestamp, text) | func.to_char(current_timestamp, 'HH12:MI:SS') | Converts an object type to a char (text). In the example, it converts a timestamp to text. |
| Conversion | to_date | func.to_date(text, format) | func.to_date(table.Created_on, 'DD-MM-YYYY') | Converts a text field into a date formatted how you like |
| Conversion | to_number | func.to_number(text, format) | func.to_number('12,454.8-', '99G999D9S') | Converts a string to a numeric value |
| Conversion | to_timestamp | func.to_timestamp(text, format) | func.to_timestamp('05 Dec 2000', 'DD Mon YYYY') | Converts a string to a timestamp |
| Time | age | func.age(timestamp, timestamp) | age(timestamp '2001-04-10', timestamp '1957-06-13') = 43 years 9 months 27 days | Subtracts the second timestamp from the first one and returns an interval as a result |
| Time | age | func.age(timestamp) | age(timestamp '1957-06-13') = 43 years 8 months 3 days | Returns the interval between the current date and the argument provided |
| Time | clock_timestamp | func.clock_timestamp() | func.clock_timestamp() | Returns a timestamp for the current date and time, which changes during execution |
| Time | current_date | func.current_date() | func.current_date(); get_column(table, 'Created On') >= (func.current_date() - 120) | Returns a date object with the current date |
| Time | current_time | func.current_time() | func.current_time() | Returns a time object with the current time and timezone |
| Time | current_timestamp | func.current_timestamp() | func.current_timestamp() | Returns a timestamp object with the current date and time at the beginning of execution |
| Time | date_part | func.date_part(text, timestamp) | func.date_part('hour', timestamp '2001-02-16 20:38:40') = 20 | Returns the part of the timestamp you are looking for (month, year, etc.) |
| Time | date_part | func.date_part(text, interval) | func.date_part('month', interval '2 years 3 months') = 3 | Returns the part of the interval you are looking for (month, year, etc.) |
| Time | date_trunc | func.date_trunc(text, timestamp) | func.date_trunc('hour', timestamp '2001-02-16 20:38:40') = 2001-02-16 20:00:00 | Truncates to the specified precision |
| Time | extract | func.extract(field from timestamp) | func.extract(hour from timestamp '2001-02-16 20:38:40') = 20 | Gets a field of a timestamp or an interval, e.g., year, month, day |
| Time | extract | func.extract(field from interval) | func.extract(month from interval '2 years 3 months') = 3 | Gets a field of a timestamp or an interval, e.g., year, month, day |
| Time | isfinite | func.isfinite(timestamp) | func.isfinite(timestamp '2001-02-16 21:28:30') = TRUE | Checks if a date, a timestamp, or an interval is finite or not (not +/-infinity) |
| Time | isfinite | func.isfinite(interval) | func.isfinite(interval '4 hours') = TRUE | Checks if a date, a timestamp, or an interval is finite or not (not +/-infinity) |
| Time | justify_days | func.justify_days(interval) | func.justify_days(interval '30 days') = 1 month | Adjusts interval so 30-day time periods are represented as months |
| Time | justify_hours | func.justify_hours(interval) | func.justify_hours(interval '24 hours') = 1 day | Adjusts interval so 24-hour time periods are represented as days |
| Time | justify_interval | func.justify_interval(interval) | func.justify_interval(interval '1 mon -1 hour') = 29 days 23:00:00 | Adjusts interval using justify_days and justify_hours, with additional sign adjustments |
| Time | now | func.now() | func.now() | Returns the date and time with time zone at which the current transaction starts |
| Time | statement_timestamp | func.statement_timestamp() | func.statement_timestamp() | Returns the current date and time at which the current statement executes |
| Time | timeofday | func.timeofday() | func.timeofday() | Returns the current date and time, like clock_timestamp, as a text string |
| Time | transaction_timestamp | func.transaction_timestamp() | func.transaction_timestamp() | Returns the date and time with time zone at which the current transaction starts |
| General Usage | > | > | table.column > 23 | Greater than |
| General Usage | < | < | table.column < 23 | Less than |
| General Usage | >= | >= | table.column >= 23 | Greater than or equal to |
| General Usage | <= | <= | table.column <= 23 | Less than or equal to |
| General Usage | == | == | table.column == 23 | Equal to |
| General Usage | != | != | table.column != 23 | Not equal to |
| General Usage | and_ | and_() | and_(table.a > 23, table.b == u'blue') | Creates an AND SQL condition |
| General Usage | any_ | any_() | table.column.any(('red', 'blue', 'yellow')) | Applies the SQL ANY() condition to a column |
| General Usage | between | between | table.column.between(23, 46); get_column(table, 'LAST_CHANGED_DATE').between({start_date}, {end_date}) | Applies the SQL BETWEEN condition |
| General Usage | contains | contains | table.column.contains('mno'); table.SOURCE_SYSTEM.contains('TEST') | Applies the SQL LIKE '%value%' condition |
| General Usage | endswith | endswith | table.column.endswith('xyz'); table.Parent.endswith(':EBITX'); table.PERIOD.endswith("01") | Applies the SQL LIKE '%value' condition |
| General Usage | FALSE | FALSE | FALSE | False, false, FALSE - Alias for Python False |
| General Usage | ilike | ilike | table.column.ilike('%foobar%') | Applies the SQL ILIKE method |
| General Usage | in_ | in_() | table.column.in_((1, 2, 3)); get_column(table, 'Source Country').in_(['CN','SG','BR']); table.MONTH.in_(['01','02','03','04','05','06','07','08','09']) | Tests if values are within a tuple of values |
| General Usage | is_ | is_ | table.column.is_(None); get_column(table, 'Min SafetyStock').is_(None); get_column(table, 'date_pod').is_(None) | Applies the SQL IS operator, for conditions like IS NULL |
| General Usage | isnot | isnot | table.column.isnot(None) | Applies the SQL IS NOT operator, for conditions like IS NOT NULL |
| General Usage | like | like | table.column.like('%foobar%'); table.SOURCE_SYSTEM.like('%Adjustments%') | Applies the SQL LIKE method |
| General Usage | not_ | not_() | not_(and_(table.a > 23, table.b == u'blue')) | Inverts the condition |
| General Usage | notilike | notilike | table.column.notilike('%foobar%') | Applies the SQL NOT ILIKE method |
| General Usage | notin | notin | table.column.notin((1, 2, 3)); table.LE.notin_(['12345','67890']) | Inverts the IN condition |
| General Usage | notlike | notlike | table.column.notlike('%foobar%') | Applies the SQL NOT LIKE method |
| General Usage | NULL | NULL | NULL | Null, null, NULL - Alias for Python None |
| General Usage | or_ | or_() | or_(table.a > 23, table.b == u'blue') | Creates an OR SQL condition |
| General Usage | startswith | startswith | table.column.startswith('abc'); get_column(table, 'Zip Code').startswith('9'); get_column(table1, 'GL Account').startswith('CORP') | Applies the SQL LIKE 'value%' condition |
| General Usage | TRUE | TRUE | TRUE | True, true, TRUE - Alias for Python True |
| Math Expressions | + | + | 2+3 = 5 | Addition |
| Math Expressions | - | - | 2-3 = -1 | Subtraction |
| Math Expressions | * | * | 2*3 = 6 | Multiplication |
| Math Expressions | / | / | 4/2 = 2 | Division |
| Math Expressions | column.op | column.op(operator) | column.op('%'): 5%4 = 1 | Modulo (remainder) |
| Math Expressions | column.op | column.op(operator) | column.op('^'): 2.0^3.0 = 8 | Exponentiation |
| Math Expressions | column.op | column.op(operator) | column.op('!'): 5! = 120 | Factorial |
| Math Expressions | column.op | column.op(operator) | column.op('!!'): !!5 = 120 | Factorial (prefix operator) |
| Math Expressions | column.op | column.op(operator) | column.op('@'): @-5.0 = 5 | Absolute value |
| Math Expressions | column.op | column.op(operator) | column.op('&'): 91&15 = 11 | Bitwise AND |
| Math Expressions | column.op | column.op(operator) | column.op('#'): 17#5 = 20 | Bitwise XOR |
| Math Expressions | column.op | column.op(operator) | column.op('~'): ~1 = -2 | Bitwise NOT |
| Math Expressions | column.op | column.op(operator) | column.op('<<'): 1<<4 = 16 | Bitwise shift left |
| Math Expressions | column.op | column.op(operator) | column.op('>>'): 8>>2 = 2 | Bitwise shift right |
| Math Functions | abs | func.abs(x) | abs(-17.4) = 17.4; func.abs(get_column(table, 'RPA Value')) | Absolute value (return type: same as input) |
| Math Functions | cbrt | func.cbrt(dp) | cbrt(27.0) = 3 | Cube root (return type: Big Float) |
| Math Functions | ceil | func.ceil(dp or numeric) | ceil(-42.8) = -42; func.ceil(func.extract('seconds', table.OutlierTime) / 60) | Smallest integer not less than argument (return type: same as input) |
| Math Functions | ceiling | func.ceiling(dp or numeric) | ceiling(-95.3) = -95 | Smallest integer not less than argument (return type: same as input) |
| Math Functions | degrees | func.degrees(dp) | degrees(0.5) = 28.6478897565412 | Radians to degrees (return type: Big Float) |
| Math Functions | exp | func.exp(dp or numeric) | exp(1.0) = 2.71828182845905 | Exponential (return type: same as input) |
| Math Functions | floor | func.floor(dp or numeric) | floor(-42.8) = -43 | Largest integer not greater than argument (return type: same as input) |
| Math Functions | greatest | func.greatest(value, ...) |  | Selects the largest value from a list. NULL values in the list are ignored. The result is NULL only if all values are NULL. (return type: same as input) |
| Math Functions | least | func.least(value, ...) |  | Selects the smallest value from a list. NULL values in the list are ignored. The result is NULL only if all values are NULL. (return type: same as input) |
| Math Functions | ln | func.ln(dp or numeric) | ln(2.0) = 0.693147180559945 | Natural logarithm (return type: same as input) |
| Math Functions | log | func.log(dp or numeric) | log(100.0) = 2 | Base 10 logarithm (return type: same as input) |
| Math Functions | log | func.log(b numeric, x numeric) | log(2.0, 64.0) = 6 | Logarithm to base b (return type: Numeric) |
| Math Functions | mod | func.mod(y, x) | mod(9, 4) = 1 | Remainder of y/x (return type: same as input) |
| Math Functions | pi | func.pi() | pi() = 3.14159265358979 | "π" constant (return type: Big Float) |
| Math Functions | power | func.power(a dp, b dp) | power(9.0, 3.0) = 729 | a raised to the power of b (return type: Big Float) |
| Math Functions | power | func.power(a numeric, b numeric) | power(9.0, 3.0) = 729 | a raised to the power of b (return type: Numeric) |
| Math Functions | radians | func.radians(dp) | radians(45.0) = 0.785398163397448 | Degrees to radians (return type: Big Float) |
| Math Functions | random | func.random() | random() | Random value in the range 0.0 <= x < 1.0 (return type: Big Float) |
| Math Functions | round | func.round(dp or numeric) | round(42.4) = 42 | Rounds to nearest integer (return type: same as input) |
| Math Functions | round | func.round(v numeric, s int) | round(42.4382, 2) = 42.44; func.round(table.RATE, 5); func.round((get_column(table, 'Order Quantity')/3), 0) | Rounds to s decimal places (return type: Numeric) |
| Math Functions | safe_divide | func.safe_divide(numerator numeric, denominator numeric, divide_by_zero_value) | func.safe_divide(get_column(table, 'VALUE__MC'), table.RATE, 0.0); func.safe_divide(get_column(table, 'Total_Weight'), (table.PickHours + table.BreakHours), 0.00) | Equivalent to the division operator (X / Y), but returns NULL if an error occurs, such as a division by zero error |
| Math Functions | setseed | func.setseed(dp) | setseed(0.54823) = 1177314959 | Sets seed for subsequent random() calls (value between 0 and 1.0) (return type: Integer) |
| Math Functions | sign | func.sign(dp or numeric) | sign(-8.4) = -1 | Sign of the argument (-1, 0, +1) (return type: same as input) |
| Math Functions | sqrt | func.sqrt(dp or numeric) | sqrt(2.0) = 1.4142135623731 | Square root (return type: same as input) |
| Math Functions | trunc | func.trunc(dp or numeric) | trunc(42.8) = 42 | Truncates toward zero (return type: same as input) |
| Math Functions | trunc | func.trunc(v numeric, s int) | trunc(42.4382, 2) = 42.43 | Truncates to s decimal places (return type: Numeric) |
| Math Functions | width_bucket | func.width_bucket(op numeric, b1 numeric, b2 numeric, count int) | width_bucket(5.35, 0.024, 10.06, 5) = 3 | Returns the bucket to which the operand would be assigned in an equidepth histogram with count buckets, in the range b1 to b2 (return type: Integer) |
| Math Trig | acos | func.acos(x) |  | Inverse cosine |
| Math Trig | asin | func.asin(x) |  | Inverse sine |
| Math Trig | atan | func.atan(x) |  | Inverse tangent |
| Math Trig | atan2 | func.atan2(x, y) |  | Inverse tangent of x/y |
| Math Trig | cos | func.cos(x) |  | Cosine |
| Math Trig | cot | func.cot(x) |  | Cotangent |
| Math Trig | sin | func.sin(x) |  | Sine |
| Math Trig | tan | func.tan(x) |  | Tangent |
| Geometry / PostGIS | ST_3DMakeBox | box3d ST_3DMakeBox(geometry point3DLowLeftBottom, geometry point3DUpRightTop) |  | Creates a BOX3D defined by the given 3D point geometries |
| Geometry / PostGIS | ST_BdMPolyFromText | geometry ST_BdMPolyFromText(text WKT, integer srid) |  | Constructs a MultiPolygon given an arbitrary collection of closed linestrings as a MultiLineString Well-Known Text representation |
| Geometry / PostGIS | ST_BdPolyFromText | geometry ST_BdPolyFromText(text WKT, integer srid) |  | Constructs a Polygon given an arbitrary collection of closed linestrings as a MultiLineString Well-Known Text representation |
| Geometry / PostGIS | ST_Box2dFromGeoHash | box2d ST_Box2dFromGeoHash(text geohash, integer precision=full_precision_of_geohash) |  | Returns a BOX2D from a GeoHash string |
| Geometry / PostGIS | ST_GeogFromText | geography ST_GeogFromText(text EWKT) |  | Returns a specified geography value from Well-Known Text representation (WKT) or extended (EWKT) |
| Geometry / PostGIS | ST_GeogFromWKB | geography ST_GeogFromWKB(bytea wkb) |  | Creates a geography instance from a Well-Known Binary geometry representation (WKB) or extended Well-Known Binary (EWKB) |
| Geometry / PostGIS | ST_GeographyFromText | geography ST_GeographyFromText(text EWKT) |  | Returns a specified geography value from Well-Known Text representation (WKT) or extended (EWKT) |
| Geometry / PostGIS | ST_GeomCollFromText | geometry ST_GeomCollFromText(text WKT, integer srid) |  | Makes a collection Geometry from collection WKT with the given SRID. If SRID is not given, it defaults to 0. |
| Geometry / PostGIS | ST_GeometryFromText | geometry ST_GeometryFromText(text WKT, integer srid) |  | Returns a specified ST_Geometry value from Well-Known Text representation (WKT). This is an alias name for ST_GeomFromText. |
| Geometry / PostGIS | ST_GeomFromEWKB | geometry ST_GeomFromEWKB(bytea EWKB) |  | Returns a specified ST_Geometry value from Extended Well-Known Binary representation (EWKB) |
| Geometry / PostGIS | ST_GeomFromEWKT | geometry ST_GeomFromEWKT(text EWKT) |  | Returns a specified ST_Geometry value from Extended Well-Known Text representation (EWKT) |
| Geometry / PostGIS | ST_GeomFromGeoHash | geometry ST_GeomFromGeoHash(text geohash, integer precision=full_precision_of_geohash) |  | Returns a geometry from a GeoHash string |
| Geometry / PostGIS | ST_GeomFromGML | geometry ST_GeomFromGML(text geomgml, integer srid) |  | Takes as input a GML representation of geometry and outputs a PostGIS geometry object |
| Geometry / PostGIS | ST_GeomFromKML | geometry ST_GeomFromKML(text geomkml) |  | Takes as input a KML representation of geometry and outputs a PostGIS geometry object |
| Geometry / PostGIS | ST_GeomFromText | geometry ST_GeomFromText(text WKT, integer srid) |  | Returns a specified ST_Geometry value from Well-Known Text representation (WKT) |
| Geometry / PostGIS | ST_GeomFromWKB | geometry ST_GeomFromWKB(bytea geom, integer srid) |  | Creates a geometry instance from a Well-Known Binary geometry representation (WKB) and optional SRID |
| Geometry / PostGIS | ST_GMLToSQL | geometry ST_GMLToSQL(text geomgml, integer srid) |  | Returns a specified ST_Geometry value from GML representation. This is an alias name for ST_GeomFromGML. |
| Geometry / PostGIS | ST_LineFromEncodedPolyline | geometry ST_LineFromEncodedPolyline(text polyline, integer precision=5) |  | Creates a LineString from an Encoded Polyline |
| Geometry / PostGIS | ST_LineFromMultiPoint | geometry ST_LineFromMultiPoint(geometry aMultiPoint) |  | Creates a LineString from a MultiPoint geometry |
| Geometry / PostGIS | ST_LineFromText | geometry ST_LineFromText(text WKT, integer srid) |  | Makes a Geometry from WKT representation with the given SRID. If SRID is not given, it defaults to 0. |
| Geometry / PostGIS | ST_LineFromWKB | geometry ST_LineFromWKB(bytea WKB, integer srid) |  | Makes a LINESTRING from WKB with the given SRID |
| Geometry / PostGIS | ST_LinestringFromWKB | geometry ST_LinestringFromWKB(bytea WKB, integer srid) |  | Makes a geometry from WKB with the given SRID |
| Geometry / PostGIS | ST_MakeBox2D | box2d ST_MakeBox2D(geometry pointLowLeft, geometry pointUpRight) |  | Creates a BOX2D defined by the given point geometries |
| Geometry / PostGIS | ST_MakeEnvelope | geometry ST_MakeEnvelope(double precision xmin, double precision ymin, double precision xmax, double precision ymax, integer srid=unknown) |  | Creates a rectangular Polygon formed from the given minimums and maximums. Input values must be in the SRS specified by the SRID. |
| Geometry / PostGIS | ST_MakeLine | geometry ST_MakeLine(geometry geom1, geometry geom2) |  | Creates a LineString from point or line geometries |
| Geometry / PostGIS | ST_MakePoint | geometry ST_MakePoint(double precision x, double precision y, double precision z, double precision m) |  | Creates a 2D, 3DZ, or 4D point geometry |
| Geometry / PostGIS | ST_MakePointM | geometry ST_MakePointM(float x, float y, float m) |  | Creates a point geometry with an x, y, and m coordinate |
| Geometry / PostGIS | ST_MakePolygon | geometry ST_MakePolygon(geometry outerlinestring, geometry[] interiorlinestrings) |  | Creates a Polygon formed by the given shell. Input geometries must be closed LINESTRINGs. |
| Geometry / PostGIS | ST_MLineFromText | geometry ST_MLineFromText(text WKT, integer srid) |  | Returns a specified ST_MultiLineString value from WKT representation |
| Geometry / PostGIS | ST_MPointFromText | geometry ST_MPointFromText(text WKT, integer srid) |  | Makes a Geometry from WKT with the given SRID. If SRID is not given, it defaults to 0. |
| Geometry / PostGIS | ST_MPolyFromText | geometry ST_MPolyFromText(text WKT, integer srid) |  | Makes a MultiPolygon Geometry from WKT with the given SRID. If SRID is not given, it defaults to 0. |
| Geometry / PostGIS | ST_Point | geometry ST_Point(float x_lon, float y_lat) |  | Returns an ST_Point with the given coordinate values. OGC alias for ST_MakePoint. |
| Geometry / PostGIS | ST_PointFromGeoHash | point ST_PointFromGeoHash(text geohash, integer precision=full_precision_of_geohash) |  | Returns a point from a GeoHash string |
| Geometry / PostGIS | ST_PointFromText | geometry ST_PointFromText(text WKT, integer srid) |  | Makes a point Geometry from WKT with the given SRID. If SRID is not given, it defaults to unknown. |
| Geometry / PostGIS | ST_PointFromWKB | geometry ST_GeomFromWKB(bytea geom, integer srid) |  | Makes a geometry from WKB with the given SRID |
| Geometry / PostGIS | ST_Polygon | geometry ST_Polygon(geometry aLineString, integer srid) |  | Returns a polygon built from the specified linestring and SRID |
| Geometry / PostGIS | ST_PolygonFromText | geometry ST_PolygonFromText(text WKT, integer srid) |  | Makes a Geometry from WKT with the given SRID. If SRID is not given, it defaults to 0. |
| Geometry / PostGIS | ST_WKBToSQL | geometry ST_WKBToSQL(bytea WKB) |  | Returns a specified ST_Geometry value from Well-Known Binary representation (WKB). This is an alias name for ST_GeomFromWKB that takes no SRID. |
| Geometry / PostGIS | ST_WKTToSQL | geometry ST_WKTToSQL(text WKT) |  | Returns a specified ST_Geometry value from Well-Known Text representation (WKT). This is an alias name for ST_GeomFromText. |
| Text Expression | ascii | func.ascii(string) returns int | ascii('x') = 120; func.ascii(get_column(table, 'TAX_SEGMENT')) | ASCII code of the first byte of the argument |
| Text Expression | bit_length | func.bit_length(string) returns int | bit_length('jose') = 32 | Number of bits in string |
| Text Expression | btrim | func.btrim(string text [, characters text]) returns Text | btrim('xyxjohnyyx', 'xy') = john | Removes the longest string consisting only of characters in characters (a space by default) from the start and end of string |
| Text Expression | char_length | func.char_length(string) or func.character_length(string) returns int | char_length('jose') = 4 | Number of characters in string |
| Text Expression | chr | func.chr(int) returns Text | chr(65) = A | Character with the given ASCII code |
| Text Expression | concat | func.concat(string, string) returns Text | concat('Post', 'greSQL') = PostgreSQL; func.concat(table.YEAR, '_', table.PERIOD) | String concatenation |
| Text Expression | convert | func.convert(string text, [src_encoding name,] dest_encoding name) | convert('text_in_utf8', 'UTF8', 'LATIN1') = text_in_utf8 represented in ISO 8859-1 encoding | Converts string to dest_encoding. The original encoding is specified by src_encoding. If src_encoding is omitted, database encoding is assumed. |
| Text Expression | convert | func.convert(string using conversion_name) | convert('PostgreSQL' using iso_8859_1_to_utf8) | Changes encoding using the specified conversion name. Conversions can be defined by CREATE CONVERSION, and some pre-defined conversion names are available. |
| Text Expression | decode | func.decode(string text, type text) |  | Decodes binary data from string previously encoded with encode. Parameter type is the same as in encode. |
| Text Expression | initcap | func.initcap(string) returns Text | initcap('hi THOMAS') = Hi Thomas | Converts the first letter of each word to uppercase and the rest to lowercase. Words are sequences of alphanumeric characters separated by non-alphanumeric characters. |
| Text Expression | integerize_truncate | func.integerize_truncate(string) | func.integerize_truncate('30.66') = 30 | Converts the value to an integer by truncating toward 0 |
| Text Expression | integerize_round | func.integerize_round(string) | func.integerize_round('30.66') = 31 | Converts the value to an integer by rounding to the nearest integer |
| Text Expression | length | func.length(string) returns int | length('jose') = 4; func.length(get_column(table, 'arrival_date_actual')) | Number of characters in string |
| Text Expression | lower | func.lower(string) returns Text | lower('TOM') = tom | Converts string to lowercase |
| Text Expression | lpad | func.lpad(string text, length int [, fill text]) returns Text | lpad('hi', 5, 'xy') = xyxhi; func.lpad('stringtofillup', 10, 'X') = stringtofi | Fills up the string to length length by prepending the characters fill (a space by default). If the string is already longer than length, it is truncated (on the right). |
| Text Expression | ltrim | func.ltrim(string text [, characters text]) returns Text | ltrim('zzzyjohn', 'xyz') = john; func.ltrim('texttotrimplaidcloud', 'texttotrim') = plaidcloud; func.ltrim('plaidcloud') = plaidcloud | Removes the longest string containing only characters from characters (a space by default) from the start of string |
| Text Expression | md5 | func.md5(string) returns Text | md5('abc') = 900150983cd24fb0d6963f7d28e17f72 | Calculates the MD5 hash of string, returning the result in hexadecimal |
| Text Expression | metric_multiply | func.metric_multiply(string) |  | The Multiply function can take multiple metrics as inputs and multiply the values of the metrics |
| Text Expression | numericize | func.numericize(string) | func.numericize('100') = 100 | Attempts to coerce a non-numeric value to a numeric value |
| Text Expression | octet_length | func.octet_length(string) returns int | octet_length('jose') = 4 | Number of bytes in string |
| Text Expression | overlay | func.overlay(string placing string from int [for int]) returns Text | overlay('Txxxxas' placing 'hom' from 2 for 4) = Thomas | Replaces a substring (returns: Text) |
| Text Expression | position | func.position(substring in string) returns int | position('om' in 'Thomas') = 3 | Location of specified substring |
| Text Expression | quote_literal | func.quote_literal(string) returns Text | quote_literal('O'Reilly') = 'O''Reilly'; func.quote_literal('plaidcloud') = 'plaidcloud' | Returns the given string suitably quoted to be used as a string literal in an SQL statement string. Embedded single quotes and backslashes are properly doubled. |
| Text Expression | regexp_replace | func.regexp_replace(string text, pattern text, replacement text [, flags text]) returns Text | regexp_replace('Thomas', '.[mN]a.', 'M') = ThM | Replaces substring matching POSIX regular expression |
| Text Expression | repeat | func.repeat(string text, number int) returns Text | repeat('Pg', 4) = PgPgPgPg | Repeats string the specified number of times |
| Text Expression | replace | func.replace(string text, from text, to text) returns Text | replace('abcdefabcdef', 'cd', 'XX') = abXXefabXXef; func.replace('string_to_replace_with_spaces', '_', ' ') = string to replace with spaces | Replaces all occurrences in string of substring from with substring to |
| Text Expression | rpad | func.rpad(string text, length int [, fill text]) returns Text | rpad('hi', 5, 'xy') = hixyx | Fills up the string to length length by appending the characters fill (a space by default). If the string is already longer than length, it is truncated. |
| Text Expression | rtrim | func.rtrim(string text [, characters text]) returns Text | rtrim('johnxxxx', 'x') = john | Removes the longest string containing only characters from characters (a space by default) from the end of string |
| Text Expression | split_part | func.split_part(string text, delimiter text, field int) returns Text | split_part('abc~@~def~@~ghi', '~@~', 2) = def; func.split_part(table.PERIOD, '_', 1) | Splits string on delimiter and returns the given field (counting from one) |
| Text Expression | strpos | func.strpos(string, substring) returns int | strpos('high', 'ig') = 2 | Location of specified substring (same as position(substring in string), but note the reversed argument order) |
| Text Expression | substr | func.substr(string, from [, count]) returns Text | substr('alphabet', 3, 2) = ph | Extracts substring (same as substring(string from from for count)) |
| Text Expression | substring | func.substring(string [from int] [for int]) returns Text | substring('Thomas' from 2 for 3) = hom; func.substring(table.ship_to_postal_code, 1, 5) | Extracts substring |
| Text Expression | substring | func.substring(string from pattern) returns Text | substring('Thomas' from '...$') = mas | Extracts substring matching POSIX regular expression |
| Text Expression | substring | func.substring(string from pattern for escape) returns Text | substring('Thomas' from '%#"o_a#"_' for '#') = oma | Extracts substring matching SQL regular expression |
| Text Expression | text_to_bigint | func.text_to_bigint(string) |  | Converts a string of character values into a large-range integer |
| Text Expression | text_to_bool | func.text_to_bool(string) |  | Converts the input text or numeric expression to a Boolean value |
| Text Expression | text_to_integer | func.text_to_integer(string) |  | Converts text to integer |
| Text Expression | text_to_numeric | func.text_to_numeric(string) |  | Converts a character string to a numeric value |
| Text Expression | text_to_smallint | func.text_to_smallint(string) |  | Converts text to a 2-byte (small) integer |
| Text Expression | to_ascii | func.to_ascii(string text [, encoding text]) returns Text | to_ascii('Karel') = Karel | Converts string to ASCII from another encoding (only supports conversion from LATIN1, LATIN2, LATIN9, and WIN1250 encodings) |
| Text Expression | to_hex | func.to_hex(number int or bigint) returns Text | to_hex(2147483647) = 7fffffff | Converts number to its equivalent hexadecimal representation |
| Text Expression | translate | func.translate(string text, from text, to text) returns Text | translate('12345', '14', 'ax') = a23x5 | Any character in the string that matches a character in the from set is replaced by the corresponding character in the to set |
| Text Expression | trim | func.trim([leading, trailing, both] [characters] from string) returns Text | trim(both 'x' from 'xTomxx') = Tom | Removes the longest string containing only the characters (a space by default) from the start/end/both ends of the string |
| Text Expression | upper | func.upper(string) returns Text | upper('tom') = TOM | Converts string to uppercase |
| Arrays | string_to_array | func.string_to_array(text, delimiter) |  | Splits a string into array elements using the supplied delimiter and optional null string |
| Arrays | unnest | func.unnest(text) |  | Expands an array to a set of rows |
| Grouping / Summarization | first | func.first(field) |  | Returns the value of a specified field in the first record of the result set returned by a query |
| Grouping / Summarization | last | func.last(field) |  | Returns the value of a specified field in the last record of the result set returned by a query |
| Grouping / Summarization | max | func.max(field) |  | An aggregate function that returns the maximum value in a set of values |
| Grouping / Summarization | median | func.median(field) |  | Calculates the middle value of a given set of numbers |
| Grouping / Summarization | stdev | func.stdev(field) |  | Calculates the standard deviation for a sample set of data |
| Grouping / Summarization | stdev_pop | func.stdev_pop(field) |  | Computes the population standard deviation (the square root of the population variance) |
| Grouping / Summarization | stdev_samp | func.stdev_samp(field) |  | Returns the sample standard deviation of an expression |
| Grouping / Summarization | var_pop | func.var_pop(field) |  | Returns the population variance of a set of numbers after discarding the nulls in this set |
| Grouping / Summarization | var_samp | func.var_samp(field) |  | Returns the sample variance of a set of numbers after discarding the nulls in this set |
| Grouping / Summarization | variance | func.variance(field) |  | Determines how far a set of values is spread out based on a sample of the population |
| JSON | array_to_json | func.array_to_json(array) |  | Returns the array as JSON. A PostgreSQL multidimensional array becomes a JSON array of arrays. |
| JSON | json_array_elements | func.json_array_elements(json) |  | Expands a JSON array to a set of JSON elements |
| JSON | json_each | func.json_each(json) |  | Expands the outermost JSON object into a set of key/value pairs |
| JSON | json_each_text | func.json_each_text(json) |  | Expands the outermost JSON object into a set of key/value pairs. The returned values are of type text. |
| JSON | json_extract_path | func.json_extract_path(json, key_1, key_2, ...) |  | Returns the JSON object pointed to by the path elements. The return value is of type JSON. |
| JSON | json_extract_path_text | func.json_extract_path_text(json, key_1, key_2, ...) |  | Returns the JSON object pointed to by the path elements. The return value is of type text. |
| JSON | json_object_keys | func.json_object_keys(json) |  | Returns the set of keys in the JSON object. Only the "outer" object will be displayed. |
| Window Functions | avg | func.avg().over(partition_by=field, order_by=field) |  | Returns the average of the values in a group, ignoring null values |
| Window Functions | count | func.count().over(partition_by=field, order_by=field) |  | An aggregate function that returns the number of rows, or the number of non-NULL rows |
| Window Functions | cume_dist | func.cume_dist().over(partition_by=field, order_by=field) |  | Calculates the cumulative distribution of a value within a group of values |
| Window Functions | dense_rank | func.dense_rank().over(partition_by=field, order_by=field) |  | A window function that assigns a rank to each row within a partition of a result set |
| Window Functions | first_value | func.first_value(field).over(partition_by=field, order_by=field) |  | Returns the first value in an ordered set of values |
| Window Functions | lag | func.lag(field, 1).over(partition_by=field, order_by=field) |  | Lets you query more than one row in a table at a time without having to join the table to itself |
| Window Functions | last_value | func.last_value(field).over(partition_by=field, order_by=field) |  | A window function that returns the last value in an ordered partition of a result set |
| Window Functions | lead | func.lead(field, 1).over(partition_by=field, order_by=field) |  | Provides access to more than one row of a table at the same time without a self-join |
| Window Functions | min | func.min().over(partition_by=field, order_by=field) |  | Returns the minimum value in a set of values |
| Window Functions | ntile | func.ntile(4).over(partition_by=field, order_by=field) |  | Distributes rows of an ordered partition into a specified number of approximately equal groups, or buckets |
| Window Functions | percent_rank | func.percent_rank().over(partition_by=field, order_by=field) |  | Evaluates the relative standing of a value within a partition of a result set |
| Window Functions | rank | func.rank().over(partition_by=field, order_by=field) |  | Assigns a rank to each row within a partition of a result set |
| Window Functions | row_number | func.row_number().over(partition_by=field, order_by=field) |  | Provides consecutive numbering of the rows in the result by the order selected in the OVER clause for each partition |
| Window Functions | sum | func.sum().over(partition_by=field, order_by=field) |  | Adds the values in the partition |

Data Types

There are a wide variety of standard data types (dtypes) to support your requirements. As datasets become larger, determining smaller size dtypes for value storage can shrink the size of the table and improve performance. The following dtypes are available:

  • Boolean
  • Text
  • Numbers
    • SmallFloat (6 Digits)
    • Float (15 Digits)
    • BigFloat
    • SmallInteger (16 bit) (-32768 to 32767)
    • Integer (32 bit) (-2147483648 to 2147483647)
    • BigInteger (64 bit) (-9223372036854775808 to 9223372036854775807)
    • Numeric
  • Dates and Times
    • Date
    • Timestamp
    • Time Interval

You can convert from one dtype to another using the func.cast() process.
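
For example, a column imported as Text that only ever holds values between -32768 and 32767 could be stored more compactly as a SmallInteger (the column name here is a placeholder):

func.cast(get_column(table, 'Store Number'), SmallInteger)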

Case Examples

A simple example

This example returns a person's name. It starts off searching to see if the first name column has a value (the "if"). If there is a value, concatenate the first name with the last name and return it (the "then"). If there isn't a first name, then return the last name only (the "else").

case(
        (table.first_name.isnot(None), func.concat(table.first_name, table.last_name)), 
        else_ = table.last_name
    )

A more complex example with multiple conditions

This example returns a price based on quantity. "If" the quantity in the order is more than 100, then give the customer the special price. If it doesn't satisfy the first condition, go to the second. If the quantity is greater than 10 (11-100), then give the customer the bulk price. Otherwise give the customer the regular price.

case( 
        (order_table.qty > 100, item_table.specialprice), 
        (order_table.qty > 10, item_table.bulkprice) , 
        else_=item_table.regularprice
    )

This example returns the first initial of the person's first name. If the user's name is wendy, return W. Otherwise if the user's name is jack, return J. Otherwise return E.

case( 
        (users_table.name == "wendy", "W"), 
        (users_table.name == "jack", "J"), 
        else_='E'
    )

The above may also be written in shorthand as:

case(
    {"wendy": "W", "jack": "J"}, 
    value=users_table.name, 
    else_='E' 
)

Other Examples

This example is from a Table:Lookup step where we are updating the "dock_final" column when the table1.dock_final value is Null.

case(
    (table1.dock_final == Null, table2.dock_final),
    else_ = table1.dock_final
    )

This example is from a Table:Lookup step where we are updating the "Marketing Channel" column when "Marketing Channel" in table1 is not 'none' or the "Serial Number" contains a '_'.

case(
    (get_column(table1, 'Marketing Channel') != 'none', get_column(table1, 'Marketing Channel')),
    (get_column(table1, 'Serial Number').contains('_'), get_column(table1, 'Marketing Channel')),
    (get_column(table2, 'Marketing Channel') != Null, get_column(table2, 'Marketing Channel')), 
    else_ = 'none'
    )

SQL CASE syntax can also be used directly wherever custom SQL is accepted, as in these dashboard examples:

CASE WHEN "sol_otif_pod_missing" = 1 THEN
    'POD is missing.'
ELSE
    'POD exists.'
END

CASE WHEN SUM("distance_dc_xd") = 0 THEN
    0
ELSE
    sum("XD") / sum("distance_dc_xd")
END

sum(CASE WHEN "dc" = 'ALAB' THEN
    ("sol_otif_infull" * "sol_otif_pgi_ontime")
ELSE
    0.0
END) / sum(CASE WHEN "dc" = 'ALAB' THEN
    1.0
ELSE
    0.000001
END)

func.cast() type conversions

| Analyze Expression | Description | Result |
| --- | --- | --- |
| func.cast(123, Text) | Integer to Text | '123' |
| func.cast('123', Integer) | Text to Integer | 123 |
| func.cast('78.69', Float) | Text to Float | 78.69 |
| func.cast('78.69', SmallFloat) | Text to Small Float | 78.69 |
| func.cast('78.69', Integer) | Text to Integer (truncates decimals) | 78 |
| func.cast('78.69', SmallInteger) | Text to Small Integer (truncates decimals) | 78 |
| func.cast('78.69', BigInteger) | Text to Big Integer (truncates decimals) | 78 |
| func.cast(1, Boolean) | Integer to Boolean | True |

Other Examples

cast(table.transaction_year, Numeric)

cast(get_column(table, 'End_Date'), Text)

func.to() Data Type Conversions

| Analyze Expression | Return Type | Description | Example |
| --- | --- | --- | --- |
| func.to_char(timestamp, text) | text | convert timestamp to text string | to_char(current_timestamp, 'HH12:MI:SS') |
| func.to_char(interval, text) | text | convert interval to string | to_char(interval '15h 2m 12s', 'HH24:MI:SS') |
| func.to_char(integer, text) | text | convert integer to string | to_char(125, '999') |
| func.to_char(bigfloat, text) | text | convert real/double precision to string | to_char(125.8::real, '999D9') |
| func.to_char(numeric, text) | text | convert numeric to string | to_char(-125.8, '999D99S') |
| func.to_date(text, text) | date | convert string to date | func.to_date(table.Created_on, 'DD-MM-YYYY') |
| func.to_number(text, text) | numeric | convert string to numeric | to_number('12,454.8-', '99G999D9S') |
| func.to_timestamp(text, text) | timestamp with time zone | convert string to timestamp | to_timestamp('05 Dec 2000', 'DD Mon YYYY') |
| func.to_timestamp(bigfloat) | timestamp with time zone | convert UNIX epoch to timestamp | to_timestamp(200120400) |

Other Examples

to_char("Sales_Order_w_Status"."WeekName")

func.to_char(func.date_trunc('week', get_column(table, 'date_sol_delivery_required')), 'YYYY-MM-DD')

func.to_date(get_column(table, 'File Creation Date'), 'YYYYMMDD')

result.CreateDate < func.to_date('09022022', 'MMDDYYYY')

to_char("date_delivery", 'YYYY-mm-dd')

Other Date Time Examples

Date Trunc

func.date_trunc('week', get_column(table, 'Date' ))

func.to_char(func.date_trunc('week', get_column(table, 'date_sol_delivery_required')), 'YYYY-MM-DD')

func.to_char(func.date_trunc('week', ((table.Date) - 6)),'MON-DD')

The following patterns can be used to select specific parts of a timestamp or to format date/time as desired.

| Pattern | Description |
| --- | --- |
| HH | hour of day (01-12) |
| HH12 | hour of day (01-12) |
| HH24 | hour of day (00-23) |
| MI | minute (00-59) |
| SS | second (00-59) |
| MS | millisecond (000-999) |
| US | microsecond (000000-999999) |
| SSSS | seconds past midnight (0-86399) |
| AM or A.M. or PM or P.M. | meridian indicator (uppercase) |
| am or a.m. or pm or p.m. | meridian indicator (lowercase) |
| Y,YYY | year (4 and more digits) with comma |
| YYYY | year (4 and more digits) |
| YYY | last 3 digits of year |
| YY | last 2 digits of year |
| Y | last digit of year |
| IYYY | ISO year (4 and more digits) |
| IYY | last 3 digits of ISO year |
| IY | last 2 digits of ISO year |
| I | last digit of ISO year |
| BC or B.C. or AD or A.D. | era indicator (uppercase) |
| bc or b.c. or ad or a.d. | era indicator (lowercase) |
| MONTH | full uppercase month name (blank-padded to 9 chars) |
| Month | full mixed-case month name (blank-padded to 9 chars) |
| month | full lowercase month name (blank-padded to 9 chars) |
| MON | abbreviated uppercase month name (3 chars) |
| Mon | abbreviated mixed-case month name (3 chars) |
| mon | abbreviated lowercase month name (3 chars) |
| MM | month number (01-12) |
| DAY | full uppercase day name (blank-padded to 9 chars) |
| Day | full mixed-case day name (blank-padded to 9 chars) |
| day | full lowercase day name (blank-padded to 9 chars) |
| DY | abbreviated uppercase day name (3 chars) |
| Dy | abbreviated mixed-case day name (3 chars) |
| dy | abbreviated lowercase day name (3 chars) |
| DDD | day of year (001-366) |
| DD | day of month (01-31) |
| D | day of week (1-7; Sunday is 1) |
| W | week in month (1-5) (the first week starts on the first day of the month) |
| WW | week number in year (1-53) (the first week starts on the first day of the year) |
| IW | ISO week number of year (the first Thursday of the new year is in week 1) |
| CC | century (2 digits) |
| J | Julian Day (days since January 1, 4712 BC) |
| Q | quarter |
| RM | month in Roman numerals (I-XII; I=January) (uppercase) |
| rm | month in Roman numerals (i-xii; i=January) (lowercase) |
| TZ | time-zone name (uppercase) |
| tz | time-zone name (lowercase) |
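
These patterns can be combined freely within a single format string. For example (output shown for an illustrative timestamp of 2022-01-05 15:34:31):

func.to_char(func.now(), 'YYYY-MM-DD HH24:MI:SS') --> 2022-01-05 15:34:31

func.to_char(func.now(), 'Dy, DD Mon YYYY') --> Wed, 05 Jan 2022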

And Operator

Example 1

This example checks that all three conditions are true: the color is green, the shape is a circle, and the price is at least 1.25.

and_(  
    table.color == 'green',  
    table.shape == 'circle',  
    table.price >= 1.25  
)

Example 2

This example ensures the origin_plant is not one of the values specified, using the != expression.

and_(  
    table.origin_plant != '5013',  
    table.origin_plant != '5026',  
    table.origin_plant != '5120',  
    table.origin_plant != '5287',  
    table.origin_plant != '5161',  
    table.origin_plant != '5192'  
)

Alternatively, for reference, the above check could be written using the not_ and or_ operators like this:

not_(  
    or_(  
        table.origin_plant == '5013',  
        table.origin_plant == '5026',  
        table.origin_plant == '5120',  
        table.origin_plant == '5287',  
        table.origin_plant == '5161',  
        table.origin_plant == '5192'  
    )  
)

Other Examples

and_(table.origin_plant != '5013',table.origin_plant != '5026')

Not Operator

not_(and_(table.VALUE_FC==0.0, table.VALUE_LC==0.0))

not_(or_(get_column(table, 'GL Account').startswith('7'), get_column(table, 'GL Account').startswith('8')))

Or Operator

Example 1

This example checks whether the period is any of the three specified dates.

or_(  
    table.period == '2020_10',  
    table.period == '2020_11',  
    table.period == '2020_12'  
)

Example 2

This example checks whether order_reason_Include is null or has the value 'KEEP'.

or_(  
    table.order_reason_Include == 'KEEP',  
    table.order_reason_Include.is_(None)  
)

Coalesce Examples

func.coalesce(table.UOM, 'none')

func.coalesce(get_column(table2, 'TECHNOLOGY_RATE'), 0.0)

func.coalesce(table_beta.adjusted_price, table_alpha.override_price, table_alpha.price) * table_beta.quantity_sold

Regexp Replace Examples

func.regexp_replace('plaidcloud', 'p', 'P') --> Plaidcloud

func.regexp_replace('remove12345alphabets','[[:alpha:]]','','g') --> 12345

func.regexp_replace('remove12345digits','[[:digit:]]','','g') --> removedigits
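
The optional flags argument can also be combined. For example, the g flag replaces every match rather than just the first, and i makes the match case-insensitive:

func.regexp_replace('ABCabc', 'a', 'X', 'gi') --> XBCXbc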

First Value Examples

This is an example of using the 'first_value()' capability to calculate the running time of the time series data where each event is on a distinct row.

This assumes you have a table of time series data that looks like this:

| location | employee | timestamp |
| --- | --- | --- |
| Building A | John Doe | 2022-01-05 15:34:31 |
| Building A | John Doe | 2022-01-05 15:44:31 |
| Building A | John Doe | 2022-01-05 15:46:41 |

table.timestamp - func.first_value(table.timestamp, 1).over(partition_by=[table.location, table.employee], order_by=table.timestamp)

Adding the expression above to an Interval column called 'run_time' would result in an output table like this:

| location | employee | timestamp | run_time |
| --- | --- | --- | --- |
| Building A | John Doe | 2022-01-05 15:34:31 | 00:00:00 |
| Building A | John Doe | 2022-01-05 15:44:31 | 00:10:00 |
| Building A | John Doe | 2022-01-05 15:46:41 | 00:12:10 |

Lag Examples

This is an example of using the 'lag()' capability to calculate the time interval in time series data where each event is on a distinct row.

This assumes you have a table of time series data that looks like this:

| location | employee | timestamp |
| --- | --- | --- |
| Building A | John Doe | 2022-01-05 15:34:31 |
| Building A | John Doe | 2022-01-05 15:44:31 |
| Building A | John Doe | 2022-01-05 15:46:41 |

table.timestamp - func.lag(table.timestamp, 1).over(partition_by=[table.location, table.employee], order_by=table.timestamp)

Adding the expression above to an Interval column called 'delta' would result in an output table like this:

| location | employee | timestamp | delta |
| --- | --- | --- | --- |
| Building A | John Doe | 2022-01-05 15:34:31 | null |
| Building A | John Doe | 2022-01-05 15:44:31 | 00:10:00 |
| Building A | John Doe | 2022-01-05 15:46:41 | 00:02:10 |

Last Value Examples

This is an example of using the 'last_value()' capability to calculate the time remaining in time series data where each event is on a distinct row.

This assumes you have a table of time series data that looks like this:

| location | employee | timestamp |
| --- | --- | --- |
| Building A | John Doe | 2022-01-05 15:34:31 |
| Building A | John Doe | 2022-01-05 15:44:31 |
| Building A | John Doe | 2022-01-05 15:46:41 |

func.last_value(table.timestamp, 1).over(partition_by=[table.location, table.employee], order_by=table.timestamp) - table.timestamp

Adding the expression above to an Interval column called 'remaining' would result in an output table like this:

| location | employee | timestamp | remaining |
| --- | --- | --- | --- |
| Building A | John Doe | 2022-01-05 15:34:31 | 00:12:10 |
| Building A | John Doe | 2022-01-05 15:44:31 | 00:02:10 |
| Building A | John Doe | 2022-01-05 15:46:41 | 00:00:00 |

Sum Examples

(sum("sol_otif_infull" * "sol_otif_pgi_ontime")) / (count(*) + 0.000001)

sum("sol_otif_qty_filled") / (sum("sol_otif_qty_requested") + 0.000001)

Count Examples

sum("RW")/COUNT(DISTINCT "ship_to_customer")

(sum("sol_otif_infull" * "sol_otif_pgi_ontime")) / (count(*) + 0.000001)

Array Examples

In the examples below, we will use the following table called contacts with the phones column defined with an array of text.

CREATE TABLE contacts (
  id SERIAL PRIMARY KEY, 
  name VARCHAR (100), 
  phones TEXT []
);

The phones column is a one-dimensional array that holds various phone numbers that a contact may have.

To define a multidimensional array, add additional square brackets.

For example, you can define a two-dimensional array as follows:

column_name data_type [][]

An example of inserting data into that table

INSERT INTO contacts (name, phones)
VALUES('John Doe',ARRAY [ '(408)-589-5846','(408)-589-5555' ]);

or, equivalently, using the curly-brace array literal syntax:

INSERT INTO contacts (name, phones)
VALUES('Lily Bush','{"(408)-589-5841"}'),
      ('William Gate','{"(408)-589-5842","(408)-589-5843"}');

Array unnest

The unnest() function expands an array to a list of rows. For example, the following query expands all phone numbers of the phones array.

SELECT 
  name, 
  unnest(phones) 
FROM 
  contacts;

Output:

| name | unnest |
| --- | --- |
| John Doe | (408)-589-5846 |
| John Doe | (408)-589-5555 |
| Lily Bush | (408)-589-5841 |
| William Gate | (408)-589-5842 |
| William Gate | (408)-589-5843 |

STRING_TO_ARRAY() function

This function is used to split a string into array elements using supplied delimiter and optional null string.

Syntax:
string_to_array(text, text [, text])

Return Type:
text[]

Example (the third argument designates a string to treat as NULL):

SELECT string_to_array('xx~^~yy~^~zz', '~^~', 'yy');

Output:
{xx,NULL,zz}

1.4.2 - MADLib Expressions (ML)

Apache MADlib is an open-source library for scalable in-database analytics. It provides data-parallel implementations of mathematical, statistical, graph and machine learning methods for structured and unstructured data.

1.4.2.1 - Data Type Transformations

1.4.2.1.1 - Array Operations

Provides support functions enabling fast array operations

PlaidCloud expressions and filters provide use of most non-administrative Apache MADLib methods. Apache MADLib methods are accessed by prefixing the standard method name with func.madlib..

In SQL

madlib.array_add(array1,array2);

In PlaidCloud Expressions & Filters

func.madlib.array_add(array1,array2)

External References

Official documentation for these methods, along with additional capabilities and usage examples, can be found in the Apache MADLib documentation.

1.4.2.1.2 - Encoding Categorical Variables

Coding categorical variables into one-hot, dummy, effects, orthogonal, and Helmert

PlaidCloud expressions and filters provide use of most non-administrative Apache MADLib methods. Apache MADLib methods are accessed by prefixing the standard method name with func.madlib..

In SQL

madlib.encode_categorical_variables ('abalone', 'abalone_out', 'height::TEXT');

In PlaidCloud Expressions & Filters

func.madlib.encode_categorical_variables ('abalone', 'abalone_out', 'height::TEXT')

External References

Official documentation for these methods, along with additional capabilities and usage examples, can be found in the Apache MADLib documentation.

1.4.2.1.3 - Low-Rank Matrix Factorization

Represent an incomplete matrix using a low-rank approximation

PlaidCloud expressions and filters provide use of most non-administrative Apache MADLib methods. Apache MADLib methods are accessed by prefixing the standard method name with func.madlib..

In SQL

madlib.lmf_igd_run('lmf_model', 'lmf_data', 'row', 'col', 'val', 999, 10000, 3, 0.1, 2, 10, 1e-9);

In PlaidCloud Expressions & Filters

func.madlib.lmf_igd_run('lmf_model', 'lmf_data', 'row', 'col', 'val', 999, 10000, 3, 0.1, 2, 10, 1e-9)

External References

Official documentation for these methods, along with additional capabilities and usage examples, can be found in the Apache MADLib documentation.

1.4.2.1.4 - Matrix Operations

Provides basic matrix operations for matrices that are too big to fit in memory

PlaidCloud expressions and filters provide use of most non-administrative Apache MADLib methods. Apache MADLib methods are accessed by prefixing the standard method name with func.madlib..

In SQL

madlib.matrix_trans('"mat_B"', 'row=row_id, val=vector', 'mat_r');

In PlaidCloud Expressions & Filters

func.madlib.matrix_trans('"mat_B"', 'row=row_id, val=vector', 'mat_r')

External References

Official documentation for these methods, along with additional capabilities and usage examples, can be found in the Apache MADLib documentation.

1.4.2.1.5 - Norms and Distance Functions

Useful utility functions for basic linear algebra operations

PlaidCloud expressions and filters provide use of most non-administrative Apache MADLib methods. Apache MADLib methods are accessed by prefixing the standard method name with func.madlib..

In SQL

madlib.squared_dist_norm2(a, b);

In PlaidCloud Expressions & Filters

func.madlib.squared_dist_norm2(a, b)

External References

Official documentation for these methods, along with additional capabilities and usage examples, can be found in the Apache MADLib documentation.

1.4.2.1.6 - Path

Performs regular pattern matching over a sequence of rows

PlaidCloud expressions and filters provide use of most non-administrative Apache MADLib methods. Apache MADLib methods are accessed by prefixing the standard method name with func.madlib..

In SQL

madlib.path('eventlog', 'path_output', 'session_id', 'event_timestamp ASC', 'buy:=page=''CHECKOUT''', '(buy)', 'sum(revenue) as checkout_rev', TRUE);

In PlaidCloud Expressions & Filters

func.madlib.path('eventlog', 'path_output', 'session_id', 'event_timestamp ASC', "buy:=page='CHECKOUT'", '(buy)', 'sum(revenue) as checkout_rev', True)

External References

Official documentation for this method, along with additional capabilities and usage examples, can be found in the Apache MADLib documentation.

1.4.2.1.7 - Pivot

Perform basic OLAP type operations on data

PlaidCloud expressions and filters provide use of most non-administrative Apache MADLib methods. Apache MADLib methods are accessed by prefixing the standard method name with func.madlib..

In SQL

madlib.pivot('pivset_ext', 'pivout', 'id', 'piv', 'val', 'sum');

In PlaidCloud Expressions & Filters

func.madlib.pivot('pivset_ext', 'pivout', 'id', 'piv', 'val', 'sum')

External References

Apache MADLib Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the Apache MADLib documentation.

1.4.2.1.8 - Sessionize

Performs time-oriented session reconstruction on a data set comprising a sequence of events

PlaidCloud expressions and filters provide use of most non-administrative Apache MADLib methods. Apache MADLib methods are accessed by prefixing the standard method name with func.madlib..

In SQL

madlib.sessionize('eventlog', 'sessionize_output_view', 'user_id', 'event_timestamp', '0:30:0');

In PlaidCloud Expressions & Filters

func.madlib.sessionize('eventlog', 'sessionize_output_view', 'user_id', 'event_timestamp', '0:30:0')

External References

Apache MADLib Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the Apache MADLib documentation.

1.4.2.1.9 - Singular Value Decomposition

Factorization of a real or complex matrix, with many useful applications in signal processing and statistics

PlaidCloud expressions and filters provide use of most non-administrative Apache MADLib methods. Apache MADLib methods are accessed by prefixing the standard method name with func.madlib..

In SQL

madlib.matrix_sparsify('mat', 'row=row_id, val=row_vec', 'mat_sparse', 'row=row_id, col=col_id, val=value');

In PlaidCloud Expressions & Filters

func.madlib.matrix_sparsify('mat', 'row=row_id, val=row_vec', 'mat_sparse', 'row=row_id, col=col_id, val=value')

External References

Apache MADLib Official Documentation for these methods can be found here.

Additional capabilities and usage examples can be found in the Apache MADLib documentation.

1.4.2.1.10 - Sparse Vectors

Provides compressed storage of vectors that have many duplicate elements

PlaidCloud expressions and filters provide use of most non-administrative Apache MADLib methods. Apache MADLib methods are accessed by prefixing the standard method name with func.madlib..

In SQL

madlib.gen_doc_svecs('svec_output', 'dictionary_table', 'id', 'term', 'documents_table', 'id', 'term', 'count');

In PlaidCloud Expressions & Filters

func.madlib.gen_doc_svecs('svec_output', 'dictionary_table', 'id', 'term', 'documents_table', 'id', 'term', 'count')

External References

Apache MADLib Official Documentation for these methods can be found here.

Additional capabilities and usage examples can be found in the Apache MADLib documentation.

1.4.2.1.11 - Stemming

Provides a basic stemming operation for text input using the Porter Stemming Algorithm

PlaidCloud expressions and filters provide use of most non-administrative Apache MADLib methods. Apache MADLib methods are accessed by prefixing the standard method name with func.madlib..

In SQL

madlib.stem_token(word);

In PlaidCloud Expressions & Filters

func.madlib.stem_token(word)

External References

Apache MADLib Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the Apache MADLib documentation.

1.4.2.2 - Deep Learning

Content coming soon

1.4.2.3 - Machine Learning

Make processing your database tables easier with 'MADlib'

Analyze uses the expansive and powerful MADlib extension. MADlib helps you take advantage of the investments you’ve made in your database by using its computational power directly, rather than extracting the data into an external system.

Additional documentation on how to use machine learning is coming soon.

1.4.3 - PostGIS Expressions (Geospatial)

PostGIS, or geospatial, expressions are basic functions that can be applied to geographic data, covering spatial relationships, distance, clustering, and other geometric operations.

1.4.3.1 - Affine Transformations

1.4.3.1.1 - func.ST_TransScale

Translates the geometry using the deltaX and deltaY args, then scales it using the XFactor, YFactor args, working in 2D only

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_TransScale(geometry geomA, float deltaX, float deltaY, float XFactor, float YFactor);

PlaidCloud

func.ST_TransScale(geometry geomA, float deltaX, float deltaY, float XFactor, float YFactor)
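
For example, with literal values (a minimal sketch; func.ST_GeomFromEWKT is assumed to be available through the same func. prefix convention):

func.ST_TransScale(func.ST_GeomFromEWKT('LINESTRING(1 2 3, 1 1 1)'), 0.5, 1, 1, 2)

This translates each point by (0.5, 1) and then scales X by 1 and Y by 2, yielding LINESTRING(1.5 6 3, 1.5 4 1).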

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.1.2 - func.ST_Translate

Returns a new geometry whose coordinates are translated delta x,delta y,delta z units

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Translate(geometry g1, float deltax, float deltay);

PlaidCloud

func.ST_Translate(geometry g1, float deltax, float deltay)
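
For example, the following sketch (assuming func.ST_GeomFromText is available through the same func. prefix) shifts a point one unit along the X axis:

func.ST_Translate(func.ST_GeomFromText('POINT(-71.01 42.37)', 4326), 1, 0)

The result is POINT(-70.01 42.37).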

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.1.3 - func.ST_Scale

Scales the geometry to a new size by multiplying the ordinates with the corresponding factor parameters

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Scale(geometry geom, geometry factor);

PlaidCloud

func.ST_Scale(geometry geom, geometry factor)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.1.4 - func.ST_RotateZ

Rotates a geometry geomA by rotRadians about the Z axis

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_RotateZ(geometry geomA, float rotRadians);

PlaidCloud

func.ST_RotateZ(geometry geomA, float rotRadians)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.1.5 - func.ST_RotateY

Rotates a geometry geomA by rotRadians about the Y axis

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_RotateY(geometry geomA, float rotRadians);

PlaidCloud

func.ST_RotateY(geometry geomA, float rotRadians)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.1.6 - func.ST_RotateX

Rotates a geometry geomA by rotRadians about the X axis

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_RotateX(geometry geomA, float rotRadians);

PlaidCloud

func.ST_RotateX(geometry geomA, float rotRadians)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.1.7 - func.ST_Rotate

Rotates geometry rotRadians counter-clockwise about the origin point

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Rotate(geometry geomA, float rotRadians);

PlaidCloud

func.ST_Rotate(geometry geomA, float rotRadians)
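
For example, rotating a line 180 degrees about the origin (a minimal sketch; func.ST_GeomFromText is assumed available):

func.ST_Rotate(func.ST_GeomFromText('LINESTRING(50 160, 50 50, 100 50)'), 3.14159265358979)

A 180-degree rotation maps each point (x, y) to (-x, -y), so the result is approximately LINESTRING(-50 -160, -50 -50, -100 -50).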

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.1.8 - func.ST_Affine

Applies a 3D affine transformation to the geometry to do things like translate, rotate, scale in one step

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Affine(geometry geomA, float a, float b, float d, float e, float xoff, float yoff);

PlaidCloud

func.ST_Affine(geometry geomA, float a, float b, float d, float e, float xoff, float yoff)
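
In the 2D form shown above, each point is transformed as x' = a*x + b*y + xoff and y' = d*x + e*y + yoff. For instance, scaling a geometry by a factor of 2 about the origin (a minimal sketch, where geom stands for any geometry expression):

func.ST_Affine(geom, 2, 0, 0, 2, 0, 0)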

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.2 - Bounding Box Functions

1.4.3.2.1 - func.ST_ZMin

Returns the Z minima of a 2D or 3D bounding box or a geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_ZMin(box3d aGeomorBox2DorBox3D);

PlaidCloud

func.ST_ZMin(box3d aGeomorBox2DorBox3D)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.2.2 - func.ST_ZMax

Returns the Z maxima of a 2D or 3D bounding box or a geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_ZMax(box3d aGeomorBox2DorBox3D);

PlaidCloud

func.ST_ZMax(box3d aGeomorBox2DorBox3D)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.2.3 - func.ST_YMin

Returns the Y minima of a 2D or 3D bounding box or a geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_YMin(box3d aGeomorBox2DorBox3D);

PlaidCloud

func.ST_YMin(box3d aGeomorBox2DorBox3D)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.2.4 - func.ST_YMax

Returns the Y maxima of a 2D or 3D bounding box or a geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_YMax(box3d aGeomorBox2DorBox3D);

PlaidCloud

func.ST_YMax(box3d aGeomorBox2DorBox3D)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.2.5 - func.ST_XMin

Returns the X minima of a 2D or 3D bounding box or a geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_XMin(box3d aGeomorBox2DorBox3D);

PlaidCloud

func.ST_XMin(box3d aGeomorBox2DorBox3D)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.2.6 - func.ST_XMax

Returns the X maxima of a 2D or 3D bounding box or a geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_XMax(box3d aGeomorBox2DorBox3D);

PlaidCloud

func.ST_XMax(box3d aGeomorBox2DorBox3D)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.2.7 - func.ST_3DMakeBox

Creates a BOX3D defined by the given two 3D point geometries

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_3DMakeBox(geometry point3DLowLeftBottom, geometry point3DUpRightTop);

PlaidCloud

func.ST_3DMakeBox(geometry point3DLowLeftBottom, geometry point3DUpRightTop)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.2.8 - func.ST_MakeBox2D

Creates a BOX2D defined by the given two point geometries

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_MakeBox2D(geometry pointLowLeft, geometry pointUpRight);

PlaidCloud

func.ST_MakeBox2D(geometry pointLowLeft, geometry pointUpRight)
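
For example (a minimal sketch using func.ST_Point, covered later in this guide):

func.ST_MakeBox2D(func.ST_Point(0, 0), func.ST_Point(2, 2))

This builds the box with lower-left corner (0, 0) and upper-right corner (2, 2).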

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.2.9 - func.ST_3DExtent

ST_3DExtent returns a box3d (includes Z coordinate) bounding box that encloses a set of geometries

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_3DExtent(geometry set geomfield);

PlaidCloud

func.ST_3DExtent(geometry set geomfield)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.2.10 - func.ST_Extent

ST_Extent returns a bounding box that encloses a set of geometries

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Extent(geometry set geomfield);

PlaidCloud

func.ST_Extent(geometry set geomfield)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.2.11 - func.ST_Expand

This function returns a bounding box expanded from the bounding box of the input

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Expand(geometry geom, float units_to_expand);

PlaidCloud

func.ST_Expand(geometry geom, float units_to_expand)
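
For example (a minimal sketch; func.ST_GeomFromText is assumed available):

func.ST_Expand(func.ST_GeomFromText('POINT(2 3)'), 10)

This expands the point's bounding box by 10 units in every direction, producing a box from (-8, -7) to (12, 13).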

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.2.12 - func.ST_EstimatedExtent

Return the 'estimated' extent of the given spatial table

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_EstimatedExtent(text table_name, text geocolumn_name);

PlaidCloud

func.ST_EstimatedExtent(text table_name, text geocolumn_name)
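
For example, with hypothetical table and column names 'roads' and 'geom':

func.ST_EstimatedExtent('roads', 'geom')

Because the extent is read from table statistics rather than scanned from the data, the result is fast but approximate, and statistics must already have been gathered on the table.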

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.2.13 - func.Box3D

Returns a BOX3D representing the 3D extent of the geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

Box3D(geometry geomA);

PlaidCloud

func.Box3D(geometry geomA)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.2.14 - func.Box2D

Returns a BOX2D representing the 2D extent of the geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

Box2D(geometry geomA);

PlaidCloud

func.Box2D(geometry geomA)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.3 - Clustering Functions

1.4.3.3.1 - func.ST_ClusterWithin

Aggregate function that returns an array of GeometryCollections that represent a set of geometries separated by a specified distance

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_ClusterWithin(geometry set g, float8 distance);

PlaidCloud

func.ST_ClusterWithin(geometry set g, float8 distance)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.3.2 - func.ST_ClusterIntersecting

ClusterIntersecting is an aggregate function that returns an array of GeometryCollections that represent an interconnected set of geometries

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_ClusterIntersecting(geometry set g);

PlaidCloud

func.ST_ClusterIntersecting(geometry set g)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4 - Geometry Accessors

1.4.3.4.1 - func.ST_Zmflag

Returns a code indicating the ZM coordinate dimension of a geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Zmflag(geometry geomA);

PlaidCloud

func.ST_Zmflag(geometry geomA)
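
The returned codes are: 0 = 2D, 1 = 3D-M, 2 = 3D-Z, 3 = 4D. For example (a minimal sketch; func.ST_GeomFromEWKT is assumed available):

func.ST_Zmflag(func.ST_GeomFromEWKT('POINT(1 2 3)'))

This returns 2, since the point carries a Z coordinate but no M value.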

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.2 - func.ST_Z

Return the Z coordinate of the point, or NULL if not available. Input must be a point

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Z(geometry a_point);

PlaidCloud

func.ST_Z(geometry a_point)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.3 - func.ST_Y

Return the Y coordinate of the point, or NULL if not available. Input must be a point

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Y(geometry a_point);

PlaidCloud

func.ST_Y(geometry a_point)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.4 - func.ST_X

Return the X coordinate of the point, or NULL if not available. Input must be a point

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_X(geometry a_point);

PlaidCloud

func.ST_X(geometry a_point)
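
For example (a minimal sketch using func.ST_Point):

func.ST_X(func.ST_Point(1, 2))

This returns 1; func.ST_Y and func.ST_Z, described above, follow the same pattern.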

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.5 - func.ST_Summary

Returns a text summary of the contents of the geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Summary(geometry g);

PlaidCloud

func.ST_Summary(geometry g)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.6 - func.ST_StartPoint

Returns the first point of a LINESTRING or CIRCULARSTRING geometry as a POINT

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_StartPoint(geometry geomA);

PlaidCloud

func.ST_StartPoint(geometry geomA)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.7 - func.ST_PointN

Return the Nth point in a single linestring or circular linestring in the geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_PointN(geometry a_linestring, integer n);

PlaidCloud

func.ST_PointN(geometry a_linestring, integer n)
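
For example (a minimal sketch; func.ST_GeomFromText is assumed available):

func.ST_PointN(func.ST_GeomFromText('LINESTRING(0 0, 1 1, 2 2)'), 2)

This returns POINT(1 1), since the index is 1-based.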

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.8 - func.ST_PatchN

Returns the 1-based Nth geometry (face) if the geometry is a POLYHEDRALSURFACE or POLYHEDRALSURFACEM

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_PatchN(geometry geomA, integer n);

PlaidCloud

func.ST_PatchN(geometry geomA, integer n)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.9 - func.ST_NumPoints

Return the number of points in an ST_LineString or ST_CircularString value

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_NumPoints(geometry g1);

PlaidCloud

func.ST_NumPoints(geometry g1)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.10 - func.ST_NumPatches

Return the number of faces on a Polyhedral Surface

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_NumPatches(geometry g1);

PlaidCloud

func.ST_NumPatches(geometry g1)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.11 - func.ST_NumInteriorRing

Returns the number of interior rings of a polygon geometry, or NULL if the geometry is not a polygon

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_NumInteriorRing(geometry a_polygon);

PlaidCloud

func.ST_NumInteriorRing(geometry a_polygon)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.12 - func.ST_NumInteriorRings

Return the number of interior rings of a polygon geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_NumInteriorRings(geometry a_polygon);

PlaidCloud

func.ST_NumInteriorRings(geometry a_polygon)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.13 - func.ST_NumGeometries

Returns the number of Geometries

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_NumGeometries(geometry geom);

PlaidCloud

func.ST_NumGeometries(geometry geom)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.14 - func.ST_NRings

If the geometry is a polygon or multi-polygon returns the number of rings

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_NRings(geometry geomA);

PlaidCloud

func.ST_NRings(geometry geomA)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.15 - func.ST_NPoints

Return the number of points in a geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_NPoints(geometry g1);

PlaidCloud

func.ST_NPoints(geometry g1)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.16 - func.ST_NDims

Returns the coordinate dimension of the geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_NDims(geometry g1);

PlaidCloud

func.ST_NDims(geometry g1)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.17 - func.ST_MemSize

Returns the amount of memory space (in bytes) the geometry takes

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_MemSize(geometry geomA);

PlaidCloud

func.ST_MemSize(geometry geomA)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.18 - func.ST_M

Return the M coordinate of a Point, or NULL if not available. Input must be a Point

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_M(geometry a_point);

PlaidCloud

func.ST_M(geometry a_point)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.19 - func.ST_IsSimple

Returns true if this Geometry has no anomalous geometric points

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_IsSimple(geometry geomA);

PlaidCloud

func.ST_IsSimple(geometry geomA)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.20 - func.ST_IsRing

Returns TRUE if this LINESTRING is both ST_IsClosed and ST_IsSimple (does not self intersect)

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_IsRing(geometry g);

PlaidCloud

func.ST_IsRing(geometry g)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.21 - func.ST_IsCollection

Returns TRUE if the geometry type of the argument is a geometry collection type

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_IsCollection(geometry g);

PlaidCloud

func.ST_IsCollection(geometry g)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.22 - func.ST_IsClosed

Returns TRUE if the LINESTRING's start and end points are coincident

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_IsClosed(geometry g);

PlaidCloud

func.ST_IsClosed(geometry g)
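
For example (a minimal sketch; func.ST_GeomFromText is assumed available):

func.ST_IsClosed(func.ST_GeomFromText('LINESTRING(0 0, 1 1, 0 1, 0 0)'))

This returns true because the first and last points coincide.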

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.23 - func.ST_InteriorRingN

Returns the Nth interior linestring ring of the polygon geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_InteriorRingN(geometry a_polygon, integer n);

PlaidCloud

func.ST_InteriorRingN(geometry a_polygon, integer n)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.24 - func.ST_HasArc

Returns true if a geometry or geometry collection contains a circular string

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_HasArc(geometry geomA);

PlaidCloud

func.ST_HasArc(geometry geomA)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.25 - func.ST_GeometryN

Return the 1-based Nth element geometry if the geometry is a GEOMETRYCOLLECTION or a MULTI* geometry; otherwise, return NULL

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_GeometryN(geometry geomA, integer n);

PlaidCloud

func.ST_GeometryN(geometry geomA, integer n)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.26 - func.ST_ExteriorRing

Returns a line string representing the exterior ring of the POLYGON geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_ExteriorRing(geometry a_polygon);

PlaidCloud

func.ST_ExteriorRing(geometry a_polygon)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.27 - func.ST_Envelope

Returns the double-precision (float8) minimum bounding box for the supplied geometry, as a geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Envelope(geometry g1);

PlaidCloud

func.ST_Envelope(geometry g1)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.28 - func.ST_BoundingDiagonal

Returns the diagonal of the supplied geometry's bounding box as a LineString

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_BoundingDiagonal(geometry geom, boolean fits=false);

PlaidCloud

func.ST_BoundingDiagonal(geometry geom, boolean fits=False)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.29 - func.ST_EndPoint

Returns the last point of a LINESTRING as a POINT. Returns NULL if the input is not a LINESTRING

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_EndPoint(geometry g);

PlaidCloud

func.ST_EndPoint(geometry g)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.30 - func.ST_DumpRings

This is a set-returning function (SRF). It returns a set of geometry_dump rows (an integer array aliased "path" and a geometry aliased "geom") for the exterior and interior rings of a polygon

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_DumpRings(geometry a_polygon);

PlaidCloud

func.ST_DumpRings(geometry a_polygon)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.31 - func.ST_DumpPoints

This set-returning function (SRF) returns a set of geometry_dump rows formed by a geometry (geom) and an array of integers (path)

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_DumpPoints(geometry geom);

PlaidCloud

func.ST_DumpPoints(geometry geom)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.32 - func.ST_Dump

This is a set-returning function (SRF). It returns a set of geometry_dump rows, formed by a geometry (geom) and an array of integers (path)

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Dump(geometry g1);

PlaidCloud

func.ST_Dump(geometry g1)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.33 - func.ST_Dimension

Return the topological dimension of this Geometry object, which must be less than or equal to the coordinate dimension

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Dimension(geometry g);

PlaidCloud

func.ST_Dimension(geometry g)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.34 - func.ST_CoordDim

Return the coordinate dimension of the ST_Geometry value

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_CoordDim(geometry geomA);

PlaidCloud

func.ST_CoordDim(geometry geomA)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.35 - func.ST_Boundary

Returns the closure of the combinatorial boundary of this Geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Boundary(geometry geomA);

PlaidCloud

func.ST_Boundary(geometry geomA)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.36 - func.ST_GeometryType

Returns the type of the geometry as a string, e.g. 'ST_LineString', 'ST_Polygon', 'ST_MultiPolygon'

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_GeometryType(geometry g1);

PlaidCloud

func.ST_GeometryType(geometry g1)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.4.37 - func.ST_IsEmpty

Returns true if this Geometry is an empty geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_IsEmpty(geometry geomA);

PlaidCloud

func.ST_IsEmpty(geometry geomA)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.5 - Geometry Constructors

1.4.3.5.1 - func.ST_Collect

Collects geometries into a geometry collection

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Collect(geometry g1, geometry g2)

PlaidCloud

func.ST_Collect(geometry g1, geometry g2)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.5.2 - func.ST_LineFromMultiPoint

Creates a LineString from a MultiPoint geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_LineFromMultiPoint(geometry aMultiPoint);

PlaidCloud

func.ST_LineFromMultiPoint(geometry aMultiPoint) 

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.5.3 - func.ST_MakeEnvelope

Creates a rectangular Polygon from the minimum and maximum values for X and Y

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_MakeEnvelope(float xmin, float ymin, float xmax, float ymax, integer srid=unknown);

PlaidCloud

func.ST_MakeEnvelope(float xmin, float ymin, float xmax, float ymax, integer srid=unknown)
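
For example, a one-degree square in WGS 84 (a minimal sketch with literal values):

func.ST_MakeEnvelope(10, 10, 11, 11, 4326)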

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.5.4 - func.ST_MakeLine

Creates a LineString containing the points of Point, MultiPoint, or LineString geometries

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_MakeLine(geometry geom1, geometry geom2); 

PlaidCloud

func.ST_MakeLine(geometry geom1, geometry geom2)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.5.5 - func.ST_MakePoint

Creates a 2D, 3D Z or 4D ZM Point geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_MakePoint(float x, float y, float z, float m);

PlaidCloud

func.ST_MakePoint(float x, float y, float z, float m)
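
For example (a minimal sketch with literal values):

func.ST_MakePoint(-71.1043, 42.3150)

Note the argument order: X (longitude) comes first, then Y (latitude). To assign a spatial reference system, wrap the result in func.ST_SetSRID, e.g. func.ST_SetSRID(func.ST_MakePoint(-71.1043, 42.3150), 4326).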

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.5.6 - func.ST_MakePointM

Creates a point with X, Y and M (measure) coordinates

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_MakePointM(float x, float y, float m);

PlaidCloud

func.ST_MakePointM(float x, float y, float m)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.5.7 - func.ST_MakePolygon

Creates a Polygon formed by the given shell and optional array of holes

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_MakePolygon(geometry linestring);

PlaidCloud

func.ST_MakePolygon(geometry linestring)
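
For example (a minimal sketch; func.ST_GeomFromText is assumed available):

func.ST_MakePolygon(func.ST_GeomFromText('LINESTRING(75 29, 77 29, 77 27, 75 29)'))

The shell linestring must be closed (its first and last points must coincide) or the call will fail.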

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.5.8 - func.ST_Point

Returns a Point with the given X and Y coordinate values

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Point(float x, float y);

PlaidCloud

func.ST_Point(float x, float y)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.5.9 - func.ST_Polygon

Returns a polygon built from the given LineString and sets the spatial reference system from the SRID

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Polygon(geometry lineString, integer srid);

PlaidCloud

func.ST_Polygon(geometry lineString, integer srid)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.6 - Geometry Editors

1.4.3.6.1 - func.ST_SwapOrdinates

Returns a version of the given geometry with given ordinates swapped

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_SwapOrdinates(geometry geom, cstring ords);

PlaidCloud

func.ST_SwapOrdinates(geometry geom, cstring ords)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.6.2 - func.ST_Snap

Snaps the vertices and segments of a geometry to another Geometry's vertices

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Snap(geometry input, geometry reference, float tolerance);

PlaidCloud

func.ST_Snap(geometry input, geometry reference, float tolerance)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.6.3 - func.ST_SnapToGrid

Snap all points of the input geometry to the grid defined by its origin and cell size

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_SnapToGrid(geometry geomA, float originX, float originY, float sizeX, float sizeY);

PlaidCloud

func.ST_SnapToGrid(geometry geomA, float originX, float originY, float sizeX, float sizeY)
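
A simpler two-argument variant that snaps to a grid of the given size with origin (0, 0) also exists. For example (a minimal sketch; func.ST_GeomFromText is assumed available):

func.ST_SnapToGrid(func.ST_GeomFromText('POINT(1.4142 2.7182)'), 0.001)

This returns POINT(1.414 2.718).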

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.6.4 - func.ST_ShiftLongitude

Reads every point/vertex in a geometry, and if the longitude coordinate is <0, adds 360 to it

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_ShiftLongitude(geometry geom);

PlaidCloud

func.ST_ShiftLongitude(geometry geom)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.6.5 - func.ST_SetPoint

Replace point N of linestring with given point

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_SetPoint(geometry linestring, integer zerobasedposition, geometry point);

PlaidCloud

func.ST_SetPoint(geometry linestring, integer zerobasedposition, geometry point)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.6.6 - func.ST_Segmentize

Returns a modified geometry having no segment longer than the given max_segment_length

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Segmentize(geometry geom, float max_segment_length);

PlaidCloud

func.ST_Segmentize(geometry geom, float max_segment_length)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.6.7 - func.ST_Reverse

Can be used on any geometry and reverses the order of the vertexes

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Reverse(geometry g1);

PlaidCloud

func.ST_Reverse(geometry g1)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.6.8 - func.ST_RemoveRepeatedPoints

Returns a version of the given geometry with duplicated points removed

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_RemoveRepeatedPoints(geometry geom, float8 tolerance);

PlaidCloud

func.ST_RemoveRepeatedPoints(geometry geom, float8 tolerance)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.6.9 - func.ST_RemovePoint

Remove a point from a linestring, given its 0-based index

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_RemovePoint(geometry linestring, integer offset);

PlaidCloud

func.ST_RemovePoint(geometry linestring, integer offset)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.6.10 - func.ST_Multi

Returns the geometry as a MULTI* geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Multi(geometry g1);

PlaidCloud

func.ST_Multi(geometry g1)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.6.11 - func.ST_LineToCurve

Converts plain LINESTRING/POLYGON to CIRCULAR STRINGs and Curved Polygons

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_LineToCurve(geometry geomANoncircular);

PlaidCloud

func.ST_LineToCurve(geometry geomANoncircular)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.6.12 - func.ST_LineMerge

Returns a (set of) LineString(s) formed by sewing together the constituent line work of a MULTILINESTRING

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_LineMerge(geometry amultilinestring);

PlaidCloud

func.ST_LineMerge(geometry amultilinestring)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.6.13 - func.ST_ForceCurve

Turns a geometry into its curved representation

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_ForceCurve(geometry g);

PlaidCloud

func.ST_ForceCurve(geometry g)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.6.14 - func.ST_ForceRHR

Forces the orientation of the vertices in a polygon to follow the right-hand rule, in which the area bounded by the polygon is to the right of the boundary

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_ForceRHR(geometry g);

PlaidCloud

func.ST_ForceRHR(geometry g)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.6.15 - func.ST_ForceSFS

Forces the geometry to use SFS 1.1 geometry types only. It supports Polyhedral Surfaces, Triangles, and Triangulated Irregular Network Surfaces

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_ForceSFS(geometry geomA);

PlaidCloud

func.ST_ForceSFS(geometry geomA)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.6.16 - func.ST_ForceCollection

Converts the geometry into a GEOMETRYCOLLECTION

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_ForceCollection(geometry geomA);

PlaidCloud

func.ST_ForceCollection(geometry geomA)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.6.17 - func.ST_Force4D

Forces the geometries into XYZM mode

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Force4D(geometry geomA, float Zvalue = 0.0, float Mvalue = 0.0);

PlaidCloud

func.ST_Force4D(geometry geomA, float Zvalue = 0.0, float Mvalue = 0.0)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.6.18 - func.ST_Force3DM

Forces the geometries into XYM mode

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Force3DM(geometry geomA, float Mvalue = 0.0);

PlaidCloud

func.ST_Force3DM(geometry geomA, float Mvalue = 0.0)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.6.19 - func.ST_Force3DZ

This function forces the geometries into XYZ mode. If a geometry has no Z component, a Z coordinate is automatically added

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Force3DZ(geometry geomA, float Zvalue = 0.0);

PlaidCloud

func.ST_Force3DZ(geometry geomA, float Zvalue = 0.0)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.6.20 - func.ST_Force3D

Forces the geometries into XYZ mode

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Force3D(geometry geomA, float Zvalue = 0.0);

PlaidCloud

func.ST_Force3D(geometry geomA, float Zvalue = 0.0)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.6.21 - func.ST_Force2D

Forces the geometries into a "2-dimensional mode" so that all output representations will only have the X and Y coordinates

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Force2D(geometry geomA);

PlaidCloud

func.ST_Force2D(geometry geomA)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.6.22 - func.ST_FlipCoordinates

Returns a version of the given geometry with X and Y axis flipped

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_FlipCoordinates(geometry geom);

PlaidCloud

func.ST_FlipCoordinates(geometry geom)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.6.23 - func.ST_CurveToLine

Converts a CIRCULAR STRING to LINESTRING or CURVEPOLYGON to POLYGON or MULTISURFACE to MULTIPOLYGON

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_CurveToLine(geometry curveGeom, float tolerance, integer tolerance_type, integer flags);

PlaidCloud

func.ST_CurveToLine(geometry curveGeom, float tolerance, integer tolerance_type, integer flags)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.6.24 - func.ST_CollectionHomogenize

Given a geometry collection, returns the "simplest" representation of the contents

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_CollectionHomogenize(geometry collection);

PlaidCloud

func.ST_CollectionHomogenize(geometry collection)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.6.25 - func.ST_CollectionExtract

Given a geometry collection, return a homogeneous multi-geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_CollectionExtract(geometry collection);

PlaidCloud

func.ST_CollectionExtract(geometry collection)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.6.26 - func.ST_AddPoint

Adds a point to a LineString before the point at the given position (0-based index)

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_AddPoint(geometry linestring, geometry point);

PlaidCloud

func.ST_AddPoint(geometry linestring, geometry point)
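
A minimal sketch with invented inputs; when no position is supplied, PostGIS appends the point to the end of the LineString:

SELECT ST_AsText(ST_AddPoint('LINESTRING(0 0, 1 1)'::geometry, ST_MakePoint(2, 2)));
-- LINESTRING(0 0,1 1,2 2)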

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.7 - Geometry Input

1.4.3.7.1 - func.ST_PointFromGeoHash

Return a point from a GeoHash string

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_PointFromGeoHash(text geohash, integer precision=full_precision_of_geohash);

PlaidCloud

func.ST_PointFromGeoHash(text geohash, integer precision=full_precision_of_geohash)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.7.2 - func.ST_LineFromEncodedPolyline

Creates a LineString from an Encoded Polyline string

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_LineFromEncodedPolyline(text polyline, integer precision=5);

PlaidCloud

func.ST_LineFromEncodedPolyline(text polyline, integer precision=5)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.7.3 - func.ST_GMLToSQL

This method implements the SQL/MM specification. SQL-MM 3 5.1.50 (except for curves support)

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_GMLToSQL(text geomgml);

PlaidCloud

func.ST_GMLToSQL(text geomgml)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.7.4 - func.ST_GeomFromKML

Constructs a PostGIS ST_Geometry object from the OGC KML representation

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_GeomFromKML(text geomkml);

PlaidCloud

func.ST_GeomFromKML(text geomkml)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.7.5 - func.ST_GeomFromGeoJSON

Constructs a PostGIS geometry object from the GeoJSON representation

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_GeomFromGeoJSON(text geomjson);

PlaidCloud

func.ST_GeomFromGeoJSON(text geomjson)
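
For illustration, with an invented GeoJSON literal:

SELECT ST_AsText(ST_GeomFromGeoJSON('{"type":"Point","coordinates":[-48.23,20.12]}'));
-- POINT(-48.23 20.12)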

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.7.6 - func.ST_GeomFromGML

Constructs a PostGIS ST_Geometry object from the OGC GML representation

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_GeomFromGML(text geomgml);

PlaidCloud

func.ST_GeomFromGML(text geomgml)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.7.7 - func.ST_GeomFromGeoHash

Return a geometry from a GeoHash string.

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_GeomFromGeoHash(text geohash, integer precision=full_precision_of_geohash);

PlaidCloud

func.ST_GeomFromGeoHash(text geohash, integer precision=full_precision_of_geohash)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.7.8 - func.ST_Box2dFromGeoHash

Return a BOX2D from a GeoHash string

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Box2dFromGeoHash(text geohash, integer precision=full_precision_of_geohash);

PlaidCloud

func.ST_Box2dFromGeoHash(text geohash, integer precision=full_precision_of_geohash)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.7.9 - func.ST_WKBToSQL

This method implements the SQL/MM specification. SQL-MM 3 5.1.36

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_WKBToSQL(bytea WKB);

PlaidCloud

func.ST_WKBToSQL(bytea WKB)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.7.10 - func.ST_PointFromWKB

This function takes a binary representation of a geometry and a Spatial Reference System ID (SRID) and creates a POINT geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_PointFromWKB(bytea wkb);

PlaidCloud

func.ST_PointFromWKB(bytea wkb)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.7.11 - func.ST_LinestringFromWKB

This function takes a binary representation of a geometry and a Spatial Reference System ID (SRID) and creates a LINESTRING geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_LinestringFromWKB(bytea WKB);

PlaidCloud

func.ST_LinestringFromWKB(bytea WKB)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.7.12 - func.ST_LineFromWKB

This function takes a binary representation of a geometry and a Spatial Reference System ID (SRID) and creates the appropriate geometry type

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_LineFromWKB(bytea WKB);

PlaidCloud

func.ST_LineFromWKB(bytea WKB)  

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.7.13 - func.ST_GeomFromWKB

This function takes a binary representation of a geometry and a Spatial Reference System ID (SRID) and creates the appropriate geometry type

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_GeomFromWKB(bytea geom);

PlaidCloud

func.ST_GeomFromWKB(bytea geom)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.7.14 - func.ST_GeomFromEWKB

Constructs a PostGIS ST_Geometry object from the OGC Extended Well-Known Binary (EWKB) representation

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_GeomFromEWKB(bytea EWKB);

PlaidCloud

func.ST_GeomFromEWKB(bytea EWKB)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.7.15 - func.ST_GeogFromWKB

This function takes a Well-Known Binary (WKB) representation of a geometry and creates an instance of the appropriate geography type

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_GeogFromWKB(bytea wkb);

PlaidCloud

func.ST_GeogFromWKB(bytea wkb)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.7.16 - func.ST_WKTToSQL

This method implements the SQL/MM specification. SQL-MM 3 5.1.34

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_WKTToSQL(text WKT);

PlaidCloud

func.ST_WKTToSQL(text WKT)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.7.17 - func.ST_PolygonFromText

Makes a Polygon Geometry from WKT with the given SRID

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_PolygonFromText(text WKT);

PlaidCloud

func.ST_PolygonFromText(text WKT)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.7.18 - func.ST_PointFromText

Constructs a PostGIS ST_Geometry point object from the OGC Well-Known text representation

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_PointFromText(text WKT);

PlaidCloud

func.ST_PointFromText(text WKT)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.7.19 - func.ST_MPolyFromText

Makes a MultiPolygon from WKT with the given SRID

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_MPolyFromText(text WKT);

PlaidCloud

func.ST_MPolyFromText(text WKT)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.7.20 - func.ST_MPointFromText

Makes a Multi-Point Geometry from WKT with the given SRID

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_MPointFromText(text WKT);

PlaidCloud

func.ST_MPointFromText(text WKT)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.7.21 - func.ST_MLineFromText

Makes a Multi-Line Geometry from Well-Known-Text (WKT) with the given SRID

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_MLineFromText(text WKT);

PlaidCloud

func.ST_MLineFromText(text WKT)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.7.22 - func.ST_LineFromText

Makes a Linestring Geometry from WKT with the given SRID

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_LineFromText(text WKT);

PlaidCloud

func.ST_LineFromText(text WKT)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.7.23 - func.ST_GeomFromText

Constructs a PostGIS ST_Geometry object from the OGC Well-Known text representation

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_GeomFromText(text WKT);

PlaidCloud

func.ST_GeomFromText(text WKT)
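
A hypothetical example with invented coordinates; the optional second argument sets the SRID:

SELECT ST_AsText(ST_GeomFromText('LINESTRING(0 0, 1 1, 2 1)'));
-- LINESTRING(0 0,1 1,2 1)

SELECT ST_SRID(ST_GeomFromText('POINT(-71.06 42.36)', 4326));
-- 4326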

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.7.24 - func.ST_GeometryFromText

This method implements the SQL/MM specification and the OpenGIS Simple Features specification

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_GeometryFromText(text WKT);

PlaidCloud

func.ST_GeometryFromText(text WKT)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.7.25 - func.ST_GeomFromEWKT

Constructs a PostGIS ST_Geometry object from the OGC Extended Well-Known text (EWKT) representation

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_GeomFromEWKT(text EWKT);

PlaidCloud

func.ST_GeomFromEWKT(text EWKT)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.7.26 - func.ST_GeomCollFromText

Makes a collection Geometry from the Well-Known-Text (WKT) representation with the given SRID

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_GeomCollFromText(text WKT, integer srid);

PlaidCloud

func.ST_GeomCollFromText(text WKT, integer srid)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.7.27 - func.ST_GeographyFromText

Returns a geography object from the well-known text representation. SRID 4326 is assumed if unspecified

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_GeographyFromText(text EWKT);

PlaidCloud

func.ST_GeographyFromText(text EWKT)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.7.28 - func.ST_GeogFromText

Returns a geography object from the well-known text or extended well-known representation

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_GeogFromText(text EWKT);

PlaidCloud

func.ST_GeogFromText(text EWKT)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.7.29 - func.ST_BdMPolyFromText

Construct a Polygon given an arbitrary collection of closed linestrings, polygons, and MultiLineStrings as a Well-Known Text representation

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_BdMPolyFromText(text WKT, integer srid);

PlaidCloud

func.ST_BdMPolyFromText(text WKT, integer srid)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.7.30 - func.ST_BdPolyFromText

Construct a Polygon given an arbitrary collection of closed linestrings as a MultiLineString Well-Known text representation

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_BdPolyFromText(text WKT, integer srid);

PlaidCloud

func.ST_BdPolyFromText(text WKT, integer srid)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.7.31 - func.GeometryType

Returns the type of the geometry as a string

Syntax

func.GeometryType()

Examples

Documentation for func.GeometryType is coming soon.

References

PostgreSQL Documentation

1.4.3.8 - Geometry Output

1.4.3.8.1 - func.ST_GeoHash

Return a GeoHash representation (http://en.wikipedia.org/wiki/Geohash) of the geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_GeoHash(geometry geom, integer maxchars=full_precision_of_point);

PlaidCloud

func.ST_GeoHash(geometry geom, integer maxchars=full_precision_of_point)
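
A sketch with invented coordinates; the input must be in lon/lat (e.g. SRID 4326), and maxchars limits the hash length:

SELECT ST_GeoHash(ST_SetSRID(ST_MakePoint(-126, 48), 4326), 5);
-- c0w3h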

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.8.2 - func.ST_AsX3D

Returns a geometry as an X3D xml formatted node element

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_AsX3D(geometry g1, integer maxdecimaldigits=15, integer options=0);

PlaidCloud

func.ST_AsX3D(geometry g1, integer maxdecimaldigits=15, integer options=0)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.8.3 - func.ST_AsSVG

Return the geometry as Scalable Vector Graphics (SVG) path data

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_AsSVG(geometry geom, integer rel=0, integer maxdecimaldigits=15);

PlaidCloud

func.ST_AsSVG(geometry geom, integer rel=0, integer maxdecimaldigits=15)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.8.4 - func.ST_AsTWKB

Returns the geometry in TWKB (Tiny Well-Known Binary) format

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_AsTWKB(geometry g1, integer decimaldigits_xy=0, integer decimaldigits_z=0, integer decimaldigits_m=0, boolean include_sizes=false, boolean include_bounding_boxes=false);

PlaidCloud

func.ST_AsTWKB(geometry g1, integer decimaldigits_xy=0, integer decimaldigits_z=0, integer decimaldigits_m=0, boolean include_sizes=false, boolean include_bounding_boxes=false)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.8.5 - func.ST_AsLatLonText

Returns the Degrees, Minutes, Seconds representation of the point

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_AsLatLonText(geometry pt, text format='');

PlaidCloud

func.ST_AsLatLonText(geometry pt, text format='')

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.8.6 - func.ST_AsKML

Return the geometry as a Keyhole Markup Language (KML) element

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_AsKML(geometry geom, integer maxdecimaldigits=15, text nprefix=NULL);

PlaidCloud

func.ST_AsKML(geometry geom, integer maxdecimaldigits=15, text nprefix=NULL)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.8.7 - func.ST_AsGML

Return the geometry as a Geography Markup Language (GML) element

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_AsGML(geometry geom, integer maxdecimaldigits=15, integer options=0);

PlaidCloud

func.ST_AsGML(geometry geom, integer maxdecimaldigits=15, integer options=0)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.8.8 - func.ST_AsGeoJSON

Return the geometry as a GeoJSON "geometry" object, or the row as a GeoJSON "feature" object

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_AsGeoJSON(geography geog, integer maxdecimaldigits=9, integer options=0);

PlaidCloud

func.ST_AsGeoJSON(geography geog, integer maxdecimaldigits=9, integer options=0)
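
For illustration, with an invented point:

SELECT ST_AsGeoJSON('POINT(-48.23 20.12)'::geometry);
-- {"type":"Point","coordinates":[-48.23,20.12]}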

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.8.9 - func.ST_AsEncodedPolyline

Returns the geometry as an Encoded Polyline

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_AsEncodedPolyline(geometry geom, integer precision=5);

PlaidCloud

func.ST_AsEncodedPolyline(geometry geom, integer precision=5)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.8.10 - func.ST_AsHEXEWKB

Returns a Geometry in HEXEWKB format (as text) using either little-endian (NDR) or big-endian (XDR) encoding

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_AsHEXEWKB(geometry g1);

PlaidCloud

func.ST_AsHEXEWKB(geometry g1)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.8.11 - func.ST_AsEWKB

Returns the Well-Known Binary representation of the geometry with SRID metadata

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_AsEWKB(geometry g1);

PlaidCloud

func.ST_AsEWKB(geometry g1)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.8.12 - func.ST_AsBinary

Returns the Well-Known Binary representation of the geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_AsBinary(geometry g1);

PlaidCloud

func.ST_AsBinary(geometry g1)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.8.13 - func.ST_AsText

Returns the Well-Known Text representation of the geometry/geography

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_AsText(geometry g1);

PlaidCloud

func.ST_AsText(geometry g1)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.8.14 - func.ST_AsEWKT

Returns the Well-Known Text representation of the geometry prefixed with the SRID

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_AsEWKT(geometry g1);

PlaidCloud

func.ST_AsEWKT(geometry g1)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.9 - Geometry Processing

1.4.3.9.1 - func.ST_SetEffectiveArea

Sets the effective area for each vertex, using the Visvalingam-Whyatt algorithm

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_SetEffectiveArea(geometry geomA, float threshold = 0, integer set_area = 1);

PlaidCloud

func.ST_SetEffectiveArea(geometry geomA, float threshold = 0, integer set_area = 1)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.9.2 - func.ST_SimplifyVW

Returns a "simplified" version of the given geometry using the Visvalingam-Whyatt algorithm

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_SimplifyVW(geometry geomA, float tolerance);

PlaidCloud

func.ST_SimplifyVW(geometry geomA, float tolerance)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.9.3 - func.ST_SimplifyPreserveTopology

Simplifies a geometry by removing points that would fall within a specified distance tolerance.

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_SimplifyPreserveTopology(geometry geomA, float tolerance);

PlaidCloud

func.ST_SimplifyPreserveTopology(geometry geomA, float tolerance)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.9.4 - func.ST_Simplify

Returns a "simplified" version of the given geometry using the Douglas-Peucker algorithm

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Simplify(geometry geomA, float tolerance, boolean preserveCollapsed);

PlaidCloud

func.ST_Simplify(geometry geomA, float tolerance, boolean preserveCollapsed)
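
A minimal sketch with invented vertices; the middle vertex deviates only 0.1 from the segment between the endpoints, which is within the 0.5 tolerance, so it is removed:

SELECT ST_AsText(ST_Simplify('LINESTRING(0 0, 1 0.1, 2 0)'::geometry, 0.5));
-- LINESTRING(0 0,2 0)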

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.9.5 - func.ST_SharedPaths

Returns a collection containing paths shared by the two input geometries

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_SharedPaths(geometry lineal1, geometry lineal2);

PlaidCloud

func.ST_SharedPaths(geometry lineal1, geometry lineal2)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.9.6 - func.ST_Polygonize

Creates a GeometryCollection containing the polygons formed by the constituent linework of a set of geometries

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Polygonize(geometry set geomfield);

PlaidCloud

func.ST_Polygonize(geometry set geomfield)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.9.7 - func.ST_PointOnSurface

Returns a POINT guaranteed to intersect a surface

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_PointOnSurface(geometry g1);

PlaidCloud

func.ST_PointOnSurface(geometry g1)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.9.8 - func.ST_OffsetCurve

Return an offset line at a given distance and side from an input line

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_OffsetCurve(geometry line, float signed_distance, text style_parameters='');

PlaidCloud

func.ST_OffsetCurve(geometry line, float signed_distance, text style_parameters='')

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.9.9 - func.ST_MinimumBoundingCircle

Returns the smallest circle polygon that contains a geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_MinimumBoundingCircle(geometry geomA, integer num_segs_per_qt_circ=48);

PlaidCloud

func.ST_MinimumBoundingCircle(geometry geomA, integer num_segs_per_qt_circ=48)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.9.10 - func.ST_DelaunayTriangles

Return the Delaunay triangulation of the vertices of the input geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_DelaunayTriangles(geometry g1, float tolerance, int4 flags);

PlaidCloud

func.ST_DelaunayTriangles(geometry g1, float tolerance, int4 flags)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.9.11 - func.ST_ConvexHull

Computes the convex hull of a geometry. The convex hull is the smallest convex geometry that encloses all geometries in the input

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_ConvexHull(geometry geomA);

PlaidCloud

func.ST_ConvexHull(geometry geomA)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.9.12 - func.ST_ConcaveHull

The concave hull of a geometry represents a possibly concave geometry that encloses the input geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_ConcaveHull(geometry geom, float target_percent, boolean allow_holes = false);

PlaidCloud

func.ST_ConcaveHull(geometry geom, float target_percent, boolean allow_holes = false)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.9.13 - func.ST_Centroid

Computes a point which is the geometric center of mass of a geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Centroid(geometry g1); 

PlaidCloud

func.ST_Centroid(geometry g1)
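
For illustration, the centroid of an invented 2x2 square:

SELECT ST_AsText(ST_Centroid('POLYGON((0 0, 0 2, 2 2, 2 0, 0 0))'::geometry));
-- POINT(1 1)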

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.9.14 - func.ST_BuildArea

Creates an areal geometry formed by the constituent linework of the input geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_BuildArea(geometry geom);

PlaidCloud

func.ST_BuildArea(geometry geom)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.9.15 - func.ST_Buffer

Returns a geometry/geography that represents all points whose distance from this Geometry/geography is less than or equal to distance

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Buffer(geometry g1, float radius_of_buffer, text buffer_style_parameters = '');

PlaidCloud

func.ST_Buffer(geometry g1, float radius_of_buffer, text buffer_style_parameters = '')
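
A sketch with invented inputs; the result is a polygonal approximation of a circle (8 segments per quarter circle by default), so its area comes out slightly under pi * r^2:

SELECT ST_Area(ST_Buffer(ST_MakePoint(0, 0), 10));
-- approximately 312, a bit less than pi * 10^2 = 314.16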

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.9.16 - func.ST_Accum

Aggregate. Constructs an array of geometries

Syntax

func.ST_Accum()

Examples

Documentation for func.ST_Accum is coming soon.

References

PostgreSQL Documentation

1.4.3.10 - Geometry Validation

1.4.3.10.1 - func.ST_MakeValid

The function attempts to create a valid representation of a given invalid geometry without losing any of the input vertices

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_MakeValid(geometry input);

PlaidCloud

func.ST_MakeValid(geometry input)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.10.2 - func.ST_IsValidReason

Returns text stating if a geometry is valid or not and, if not valid, a reason why

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_IsValidReason(geometry geomA, integer flags);

PlaidCloud

func.ST_IsValidReason(geometry geomA, integer flags)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.10.3 - func.ST_IsValidDetail

Returns a valid_detail row, formed by a boolean (valid) stating whether the geometry is valid or invalid

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_IsValidDetail(geometry geom, integer flags);

PlaidCloud

func.ST_IsValidDetail(geometry geom, integer flags)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.10.4 - func.ST_IsValid

Test if an ST_Geometry value is well-formed in 2D according to the OGC rules

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_IsValid(geometry g);

PlaidCloud

func.ST_IsValid(geometry g)
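
For illustration, a self-intersecting "bow-tie" polygon is invalid, while a simple linestring is valid (inputs invented):

SELECT ST_IsValid('POLYGON((0 0, 2 2, 2 0, 0 2, 0 0))'::geometry);
-- false
SELECT ST_IsValid('LINESTRING(0 0, 1 1)'::geometry);
-- true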

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.11 - Linear Referencing

1.4.3.11.1 - func.ST_AddMeasure

Return a derived geometry with measure elements linearly interpolated between the start and end points

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_AddMeasure(geometry geom_mline, float8 measure_start, float8 measure_end);

PlaidCloud

func.ST_AddMeasure(geometry geom_mline, float8 measure_start, float8 measure_end)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.11.2 - func.ST_InterpolatePoint

Return the value of the measure dimension of a geometry at the point closest to the provided point

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_InterpolatePoint(geometry line, geometry point);

PlaidCloud

func.ST_InterpolatePoint(geometry line, geometry point)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.11.3 - func.ST_LocateBetweenElevations

Return a derived geometry (collection) value with elements that intersect the specified range of elevations inclusively

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_LocateBetweenElevations(geometry geom, float8 elevation_start, float8 elevation_end);

PlaidCloud

func.ST_LocateBetweenElevations(geometry geom, float8 elevation_start, float8 elevation_end)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.11.4 - func.ST_LocateBetween

Return a derived geometry collection with elements that match the specified range of measures inclusively

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_LocateBetween(geometry geom, float8 measure_start, float8 measure_end, float8 offset);

PlaidCloud

func.ST_LocateBetween(geometry geom, float8 measure_start, float8 measure_end, float8 offset)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.11.5 - func.ST_LocateAlong

Return a derived geometry collection value with elements that match the specified measure

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_LocateAlong(geometry ageom_with_measure, float8 a_measure, float8 offset);

PlaidCloud

func.ST_LocateAlong(geometry ageom_with_measure, float8 a_measure, float8 offset)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.11.6 - func.ST_LineSubstring

Return a linestring being a substring of the input one starting and ending at the given fractions of total 2d length

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_LineSubstring(geometry a_linestring, float8 startfraction, float8 endfraction);

PlaidCloud

func.ST_LineSubstring(geometry a_linestring, float8 startfraction, float8 endfraction)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.11.7 - func.ST_LineLocatePoint

Returns a float between 0 and 1 representing the location of the closest point on the LineString to the given Point

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_LineLocatePoint(geometry a_linestring, geometry a_point);

PlaidCloud

func.ST_LineLocatePoint(geometry a_linestring, geometry a_point)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.11.8 - func.ST_LineInterpolatePoint

Returns a point interpolated along a line

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_LineInterpolatePoint(geometry a_linestring, float8 a_fraction);

PlaidCloud

func.ST_LineInterpolatePoint(geometry a_linestring, float8 a_fraction)
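
A minimal sketch with an invented line; the fraction is measured along the line's 2D length:

SELECT ST_AsText(ST_LineInterpolatePoint('LINESTRING(0 0, 10 0)'::geometry, 0.25));
-- POINT(2.5 0)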

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.12 - Measurement Functions

1.4.3.12.1 - func.ST_ShortestLine

Returns the 2-dimensional shortest line between two geometries

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_ShortestLine(geometry g1, geometry g2);

PlaidCloud

func.ST_ShortestLine(geometry g1, geometry g2)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.12.2 - func.ST_Project

Returns a point projected from a start point along a geodesic using a given distance and azimuth (bearing)

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Project(geography g1, float distance, float azimuth);

PlaidCloud

func.ST_Project(geography g1, float distance, float azimuth)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.12.3 - func.ST_Perimeter2D

Returns the 2-dimensional perimeter of a polygonal geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Perimeter2D(geometry geomA);

PlaidCloud

func.ST_Perimeter2D(geometry geomA)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.12.4 - func.ST_Perimeter

Returns the 2D perimeter of the geometry/geography if it is an ST_Surface or ST_MultiSurface (Polygon, MultiPolygon)

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Perimeter(geometry g1);

PlaidCloud

func.ST_Perimeter(geometry g1)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.12.5 - func.ST_MaxDistance

Returns the 2-dimensional maximum distance between two geometries, in projected units

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_MaxDistance(geometry g1, geometry g2);

PlaidCloud

func.ST_MaxDistance(geometry g1, geometry g2)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.12.6 - func.ST_LongestLine

Returns the 2-dimensional longest line between the points of two geometries

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_LongestLine(geometry g1, geometry g2);

PlaidCloud

func.ST_LongestLine(geometry g1, geometry g2)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.12.7 - func.ST_3DShortestLine

Returns the 3-dimensional shortest line between two geometries

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_3DShortestLine(geometry g1, geometry g2);

PlaidCloud

func.ST_3DShortestLine(geometry g1, geometry g2)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.12.8 - func.ST_3DPerimeter

Returns the 3-dimensional perimeter of the geometry, if it is a polygon or multi-polygon

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_3DPerimeter(geometry geomA);

PlaidCloud

func.ST_3DPerimeter(geometry geomA)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.12.9 - func.ST_3DMaxDistance

Returns the 3-dimensional maximum cartesian distance between two geometries in projected units

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_3DMaxDistance(geometry g1, geometry g2);

PlaidCloud

func.ST_3DMaxDistance(geometry g1, geometry g2)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.12.10 - func.ST_LengthSpheroid

Returns the 2D or 3D length/perimeter of a lon/lat geometry on a spheroid

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_LengthSpheroid(geometry a_geometry, spheroid a_spheroid);

PlaidCloud

func.ST_LengthSpheroid(geometry a_geometry, spheroid a_spheroid)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.12.11 - func.ST_3DLongestLine

Returns the 3-dimensional longest line between two geometries

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_3DLongestLine(geometry g1, geometry g2);

PlaidCloud

func.ST_3DLongestLine(geometry g1, geometry g2)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.12.12 - func.ST_3DLength

Returns the 3-dimensional or 2-dimensional length of the geometry if it is a linestring or multi-linestring

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_3DLength(geometry a_3dlinestring);

PlaidCloud

func.ST_3DLength(geometry a_3dlinestring)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.12.13 - func.ST_Length2D

Returns the 2D length of the geometry if it is a linestring or multi-linestring

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Length2D(geometry a_2dlinestring);

PlaidCloud

func.ST_Length2D(geometry a_2dlinestring)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.12.14 - func.ST_Length

Returns the 2D Cartesian length of the geometry for geometry types and uses the inverse geodesic calculation for geography types

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Length(geometry a_2dlinestring);

PlaidCloud

func.ST_Length(geometry a_2dlinestring)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.12.15 - func.ST_HausdorffDistance

Returns the Hausdorff distance between two geometries, a measure of how similar or dissimilar 2 geometries are

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_HausdorffDistance(geometry g1, geometry g2);

PlaidCloud

func.ST_HausdorffDistance(geometry g1, geometry g2)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.12.16 - func.ST_DistanceSpheroid

Returns minimum distance in meters between two lon/lat geometries given a particular spheroid

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_DistanceSpheroid(geometry geomlonlatA, geometry geomlonlatB, spheroid measurement_spheroid);

PlaidCloud

func.ST_DistanceSpheroid(geometry geomlonlatA, geometry geomlonlatB, spheroid measurement_spheroid)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.12.17 - func.ST_Distance

For geometry types returns the minimum 2D Cartesian (planar) distance between two geometries, in projected units (spatial ref units)

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Distance(geometry g1, geometry g2);

PlaidCloud

func.ST_Distance(geometry g1, geometry g2)
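
For illustration with invented points, this is the classic 3-4-5 right triangle:

SELECT ST_Distance('POINT(0 0)'::geometry, 'POINT(3 4)'::geometry);
-- 5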

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.12.18 - func.ST_3DClosestPoint

Returns the 3-dimensional point on g1 that is closest to g2

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_3DClosestPoint(geometry g1, geometry g2);

PlaidCloud

func.ST_3DClosestPoint(geometry g1, geometry g2)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.12.19 - func.ST_ClosestPoint

Returns the 2-dimensional point on g1 that is closest to g2

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_ClosestPoint(geometry g1, geometry g2);

PlaidCloud

func.ST_ClosestPoint(geometry g1, geometry g2)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.12.20 - func.ST_Azimuth

Returns the azimuth in radians of the segment defined by the given point geometries, or NULL if the two points are coincident

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Azimuth(geometry pointA, geometry pointB);

PlaidCloud

func.ST_Azimuth(geometry pointA, geometry pointB)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation
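
Because ST_Azimuth returns radians, it is commonly wrapped in the standard degrees function to obtain a compass-style bearing. A minimal sketch, where pointA and pointB stand for any point geometry expressions:

func.degrees(func.ST_Azimuth(pointA, pointB))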

1.4.3.12.21 - func.ST_Area

Returns the area of a polygonal geometry

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Area(geometry g1);

PlaidCloud

func.ST_Area(geometry g1)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.12.22 - func.ST_Length2D_Spheroid

Returns the 2D length or perimeter of the geometry on a spheroid

Syntax

func.ST_Length2D_Spheroid()

Examples

Documentation for func.ST_Length2D_Spheroid is coming soon.
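
While a full example is pending, the PostGIS signature for this method takes a linestring geometry and a spheroid, so PlaidCloud usage should follow the same func. pattern as the other entries in this section:

func.ST_Length2D_Spheroid(geometry a_linestring, spheroid a_spheroid)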

References

PostGIS Documentation

1.4.3.13 - Overlay Functions

1.4.3.13.1 - func.ST_UnaryUnion

A single-input variant of ST_Union

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_UnaryUnion(geometry geom, float8 gridSize = -1);

PlaidCloud

func.ST_UnaryUnion(geometry geom, float8 gridSize = -1)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.13.2 - func.ST_Union

Unions the input geometries, merging geometry to produce a result geometry with no overlaps

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Union(geometry g1, geometry g2);

PlaidCloud

func.ST_Union(geometry g1, geometry g2)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.13.3 - func.ST_SymDifference

Returns a geometry representing the portions of geometries A and B that do not intersect

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_SymDifference(geometry geomA, geometry geomB, float8 gridSize = -1);

PlaidCloud

func.ST_SymDifference(geometry geomA, geometry geomB, float8 gridSize = -1)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.13.4 - func.ST_Subdivide

Divides geometry into parts using rectilinear lines, until each part can be represented using no more than max_vertices

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Subdivide(geometry geom, integer max_vertices=256, float8 gridSize = -1);

PlaidCloud

func.ST_Subdivide(geometry geom, integer max_vertices=256, float8 gridSize = -1)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.13.5 - func.ST_Split

The function supports splitting a line by a (multi)point, (multi)line or (multi)polygon boundary, or a (multi)polygon by line

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Split(geometry input, geometry blade);

PlaidCloud

func.ST_Split(geometry input, geometry blade)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.13.6 - func.ST_Node

Returns a (Multi)LineString representing the fully noded version of a collection of linestrings

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Node(geometry geom);

PlaidCloud

func.ST_Node(geometry geom)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.13.7 - func.ST_MemUnion

Aggregate function which unions geometry in a memory-efficient but slower way

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_MemUnion(geometry set geomfield);

PlaidCloud

func.ST_MemUnion(geometry set geomfield)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.13.8 - func.ST_Intersection

Returns a geometry representing the point-set intersection of two geometries

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Intersection( geography geogA , geography geogB );

PlaidCloud

func.ST_Intersection( geography geogA , geography geogB )

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.13.9 - func.ST_Difference

Returns a geometry representing the part of geometry A that does not intersect geometry B

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Difference(geometry geomA, geometry geomB, float8 gridSize = -1);

PlaidCloud

func.ST_Difference(geometry geomA, geometry geomB, float8 gridSize = -1)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.13.10 - func.ST_ClipByBox2D

Clips a geometry by a 2D box in a fast and tolerant but possibly invalid way

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_ClipByBox2D(geometry geom, box2d box);

PlaidCloud

func.ST_ClipByBox2D(geometry geom, box2d box)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.14 - Spatial Reference System Functions

1.4.3.14.1 - func.ST_Transform

Returns a new geometry with its coordinates transformed to a different spatial reference system

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Transform(geometry g1, integer srid);

PlaidCloud

func.ST_Transform(geometry g1, integer srid)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.14.2 - func.ST_SRID

Returns the spatial reference identifier for the ST_Geometry as defined in spatial_ref_sys table

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_SRID(geometry g1);

PlaidCloud

func.ST_SRID(geometry g1)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.14.3 - func.ST_SetSRID

Sets the SRID on a geometry to a particular integer value

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_SetSRID(geometry geom, integer srid);

PlaidCloud

func.ST_SetSRID(geometry geom, integer srid)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation
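
Note that ST_SetSRID only stamps a geometry with an SRID; it does not reproject coordinates, which is what ST_Transform (above) does. The two are commonly chained to declare a source SRID and then reproject, as in this sketch (geom stands for any geometry expression, and the SRID values 4326 and 3857 are examples only):

func.ST_Transform(func.ST_SetSRID(geom, 4326), 3857)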

1.4.3.14.4 - func.Find_SRID

Returns the integer SRID of the specified geometry column

Syntax

func.Find_SRID()

Examples

Documentation for func.Find_SRID is coming soon.
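
While a full example is pending, the PostGIS signature for this method takes a schema name, a table name, and a geometry column name, so PlaidCloud usage should follow the same func. pattern as the other entries in this section:

func.Find_SRID(varchar a_schema_name, varchar a_table_name, varchar a_geometry_column)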

References

PostGIS Documentation

1.4.3.15 - Spatial Relationships

1.4.3.15.1 - func.ST_PointInsideCircle

Returns true if the geometry is a point and lies inside the circle centered at (center_x, center_y) with the given radius

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_PointInsideCircle(geometry a_point, float center_x, float center_y, float radius);

PlaidCloud

func.ST_PointInsideCircle(geometry a_point, float center_x, float center_y, float radius)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.15.2 - func.ST_DWithin

Returns true if the geometries are within a given distance

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_DWithin(geometry g1, geometry g2, double precision distance_of_srid);

PlaidCloud

func.ST_DWithin(geometry g1, geometry g2, double precision distance_of_srid)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation
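
As a usage sketch, a spatial predicate like this can filter queries built with the table objects described later in the Jupyter Notebooks section. The table name (Store_Locations), geometry column (geom), point coordinates, SRID, and distance below are all hypothetical:

from sqlalchemy import func

from plaidcloud.utilities.connect import PlaidConnection

conn = PlaidConnection()
tbl_stores = conn.get_table('Store_Locations')  # hypothetical table
point = func.ST_SetSRID(func.ST_MakePoint(-87.6298, 41.8781), 4326)  # hypothetical reference point
df_nearby = conn.get_data(
    tbl_stores.select().where(
        func.ST_DWithin(tbl_stores.c.geom, point, 0.5)  # distance is in the SRID's units
    )
)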

1.4.3.15.3 - func.ST_DFullyWithin

Returns true if the geometries are entirely within the specified distance of one another

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_DFullyWithin(geometry g1, geometry g2, double precision distance);

PlaidCloud

func.ST_DFullyWithin(geometry g1, geometry g2, double precision distance)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.15.4 - func.ST_Within

Returns TRUE if geometry A is completely inside geometry B

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Within(geometry A, geometry B);

PlaidCloud

func.ST_Within(geometry A, geometry B)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.15.5 - func.ST_Touches

Returns TRUE if the only points in common between g1 and g2 lie in the union of the boundaries of g1 and g2

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Touches(geometry g1, geometry g2);

PlaidCloud

func.ST_Touches(geometry g1, geometry g2)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.15.6 - func.ST_RelateMatch

Tests if a Dimensionally Extended 9-Intersection Model (DE-9IM) intersectionMatrix value satisfies an intersectionMatrixPattern

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_RelateMatch(text intersectionMatrix, text intersectionMatrixPattern);

PlaidCloud

func.ST_RelateMatch(text intersectionMatrix, text intersectionMatrixPattern)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.15.7 - func.ST_Relate

These functions allow testing and evaluating the spatial (topological) relationship between two geometries

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Relate(geometry geomA, geometry geomB);

PlaidCloud

func.ST_Relate(geometry geomA, geometry geomB)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.15.8 - func.ST_OrderingEquals

Compares two geometries and returns TRUE if the geometries are equal and their coordinates are in the same order

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_OrderingEquals(geometry A, geometry B);

PlaidCloud

func.ST_OrderingEquals(geometry A, geometry B)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.15.9 - func.ST_Overlaps

Returns TRUE if the Geometries "spatially overlap"

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Overlaps(geometry A, geometry B);

PlaidCloud

func.ST_Overlaps(geometry A, geometry B)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.15.10 - func.ST_Intersects

If two geometries or geographies share any portion of space, then they intersect

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Intersects( geometry geomA , geometry geomB );

PlaidCloud

func.ST_Intersects( geometry geomA , geometry geomB )

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.15.11 - func.ST_Equals

Returns TRUE if the given Geometries are "spatially equal"

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Equals(geometry A, geometry B);

PlaidCloud

func.ST_Equals(geometry A, geometry B)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.15.12 - func.ST_Disjoint

Returns TRUE if two geometries do not spatially intersect; Overlaps, Touches, and Within all imply the geometries are not spatially disjoint

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Disjoint( geometry A , geometry B );

PlaidCloud

func.ST_Disjoint( geometry A , geometry B )

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.15.13 - func.ST_LineCrossingDirection

Given 2 linestrings, returns an integer between -3 and 3 indicating what kind of crossing behavior exists between them

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_LineCrossingDirection(geometry linestringA, geometry linestringB);

PlaidCloud

func.ST_LineCrossingDirection(geometry linestringA, geometry linestringB)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.15.14 - func.ST_3DDWithin

For geometry types, returns true if the 3D distance between two objects is within distance_of_srid, specified in projected units

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_3DDWithin(geometry g1, geometry g2, double precision distance_of_srid);

PlaidCloud

func.ST_3DDWithin(geometry g1, geometry g2, double precision distance_of_srid)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.15.15 - func.ST_Crosses

Takes two geometry objects and returns TRUE if they "spatially cross", that is, they have some, but not all, interior points in common

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Crosses(geometry g1, geometry g2);

PlaidCloud

func.ST_Crosses(geometry g1, geometry g2)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.15.16 - func.ST_3DDFullyWithin

Returns true if the 3D geometries are fully within the specified distance of one another

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_3DDFullyWithin(geometry g1, geometry g2, double precision distance);

PlaidCloud

func.ST_3DDFullyWithin(geometry g1, geometry g2, double precision distance)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.15.17 - func.ST_CoveredBy

Returns 1 (TRUE) if no point in Geometry/Geography A is outside Geometry/Geography B

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_CoveredBy(geometry geomA, geometry geomB);

PlaidCloud

func.ST_CoveredBy(geometry geomA, geometry geomB)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.15.18 - func.ST_ContainsProperly

Returns true if B intersects the interior of A but not the boundary (or exterior)

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_ContainsProperly(geometry geomA, geometry geomB);

PlaidCloud

func.ST_ContainsProperly(geometry geomA, geometry geomB)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.15.19 - func.ST_Covers

Returns 1 (TRUE) if no point in Geometry/Geography B is outside Geometry/Geography A

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Covers(geometry geomA, geometry geomB);

PlaidCloud

func.ST_Covers(geometry geomA, geometry geomB)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.15.20 - func.ST_Contains

Geometry A contains geometry B only if no points of B lie in the exterior of A, and at least one point of the interior of B lies in the interior of A

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_Contains(geometry geomA, geometry geomB);

PlaidCloud

func.ST_Contains(geometry geomA, geometry geomB)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.15.21 - func.ST_3DIntersects

Overlaps, Touches, and Within all imply spatial intersection; if any of them returns true, the geometries also spatially intersect

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_3DIntersects( geometry geomA , geometry geomB );

PlaidCloud

func.ST_3DIntersects( geometry geomA , geometry geomB )

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation

1.4.3.16 - Trajectory Functions

1.4.3.16.1 - func.ST_DistanceCPA

Returns the minimum distance two moving objects have ever been from each other

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_DistanceCPA(geometry track1, geometry track2);

PlaidCloud

func.ST_DistanceCPA(geometry track1, geometry track2)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.16.2 - func.ST_CPAWithin

Checks whether two moving objects have ever been within the specified maximum distance

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_CPAWithin(geometry track1, geometry track2, float8 maxdist);

PlaidCloud

func.ST_CPAWithin(geometry track1, geometry track2, float8 maxdist)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.16.3 - func.ST_ClosestPointOfApproach

Returns the smallest measure at which points interpolated along the given trajectories are at the smallest distance

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_ClosestPointOfApproach(geometry track1, geometry track2);

PlaidCloud

func.ST_ClosestPointOfApproach(geometry track1, geometry track2)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

1.4.3.16.4 - func.ST_IsValidTrajectory

Tests if a geometry encodes a valid trajectory

Description

PlaidCloud expressions and filters provide use of most non-administrative PostGIS methods. PostGIS methods are accessed by prefixing the standard method name with func..

Examples

SQL

ST_IsValidTrajectory(geometry line);

PlaidCloud

func.ST_IsValidTrajectory(geometry line)

References

PostGIS Official Documentation for this method can be found here.

Additional capabilities and usage examples can be found in the PostGIS documentation.

2 - Identity and Access Management (IAM)

Manage the permissions for accessing PlaidCloud

Identity and Access Management

2.1 - Overview

A general overview of how members and organizations work.

2.1.1 - Organizations and Workspaces Explained

Learn about the differences between Organizations and Workspaces

Organizations are a collection of one or more workspaces. All data and projects exist within workspaces. Organizations only serve as a way to manage multiple workspaces.

Security and access controls are managed by each workspace to cater to the workspace's unique role within the organization. PlaidCloud’s workspaces aim to maximize collaboration and increase information access while restricting access to private or confidential information.

In PlaidCloud, Organizations serve as the foundation, while Workspaces support unique needs. As a multi-tenant workspace service, PlaidCloud eliminates the need to perform technical configuration of isolated workspace environments. It is designed to provide maximum collaboration and flexibility while ensuring that privacy and confidentiality are never compromised, through complete isolation of people and data by workspace.

PlaidCloud’s Organizations make managing small teams, large teams, and multinational organizations easy. You can easily integrate authentication and member management with existing systems or, if you choose, manage them manually. PlaidCloud’s multiple tiers of access control simultaneously minimize management overhead and keep data and activities compartmentalized.

While this may sound complex, we keep the process as simple as possible, so getting started and scaling up is not difficult.

PlaidCloud is broken down into the following levels of access control:

[Figure: PlaidCloud Organizations, Workspaces, and Projects]

Each progressive layer of control enables administrators to apply access controls and permissions for certain operations.

2.1.2 - Viewing and Managing Workspaces

Control how workspaces are configured and accessed

Workspaces allow an Organization to operate as its own cloud-based service for small to large Organizations. For example, small teams may have a single workspace in their Organization, while large Organizations may have hundreds of specialized workspaces.

Workspaces manage access and visibility while providing isolated areas for an Organization’s members to operate. Workspace access is assigned to members in a private, multi-tenant environment for the Organization. With workspaces, teams can collaborate on open projects within some workspaces while maintaining strict confidentiality in other workspaces.

Since workspaces are fully isolated, data cannot be directly shared or accessed across workspaces. However, workspaces can access the same shared Document area, so that sharing of files between workspaces is possible if desired.

Viewing and Managing Workspaces

Viewing and managing workspaces within an Organization is simple. You must be an Organization owner to manage workspaces. To view and manage workspaces:

  1. Select “Organization Settings” from the menu in the upper right of the browser
  2. Click “Workspaces”

This will bring you to a table showing all the current workspaces within the Organization. From here you can create, update, suspend, and delete workspaces, add apps to workspaces, and manage member access to each workspace.

Creating a Workspace

  1. Select “Organization Settings” from the menu in the upper right of the browser
  2. Click “Workspaces”
  3. Click the “New Workspace” button
  4. Complete the required fields
  5. Click “Submit”

Updating a Workspace

  1. Select “Organization Settings” from the menu in the upper right of the browser
  2. Click “Workspaces”
  3. Click the edit icon of the desired workspace
  4. Adjust the fields as desired
  5. Click “Submit”

Suspending a Workspace

  1. Select “Organization Settings” from the menu in the upper right of the browser
  2. Click “Workspaces”
  3. Uncheck the “Active” checkbox of the desired workspace
  4. Click “Submit”

Deleting a Workspace

  1. Select “Organization Settings” from the menu in the upper right of the browser
  2. Click “Workspaces”
  3. Click the delete icon of the desired workspace
  4. Click “Delete” again

Managing Apps Available in Workspace

By default, new workspaces have three apps automatically added: Analyze, Document, and Identity. While Identity cannot be removed because it is essential to managing access and roles within a workspace, Analyze and Document can be removed. To manage which apps are available in a workspace, including custom apps:

  1. Select “Organization Settings” from the menu in the upper right of the browser
  2. Click “Workspaces”
  3. Click on the apps icon for the workspace you want to modify the associated apps
  4. If you want to remove an app, click on the delete icon for that app and confirm the deletion
  5. If you want to add a new app, click on the “Add App to Workspace” button, select the app you want to add, check the “Enable for Use” checkbox, and click the “Create” button

2.1.3 - Managing Workspace Members

Add, remove, and update access for members to workspaces

While members may be associated with other workspaces within an Organization, each workspace has its own access restrictions. Members must be granted permission to view and access a workspace.

Adding Members

To add a member:

  1. Select “Organization Settings” from the menu in the upper right of the browser
  2. Click “Workspaces”
  3. Click the members icon
  4. Select the desired member and drag them to the appropriate column
  5. Click “Submit”

To send an invite:

  1. Select “Organization Settings” from the menu in the upper right of the browser
  2. Click “Workspaces”
  3. Click the invite icon

This process will send an email invitation to the member. The member then needs to click the link in the email and follow the directions to log in or create an account if they are new to PlaidCloud. After a successful login, the member will be added to the workspace.

Removing Members

To remove a member:

  1. Select “Organization Settings” from the menu in the upper right of the browser
  2. Click “Workspaces”
  3. Click the members icon
  4. Select the desired member and drag them to the appropriate column
  5. Click “Submit”

2.2 - Managing Security Groups and Assignments

Manage security group settings and view membership

PlaidCloud’s security and access management is straightforward. A member is granted or denied access based on the groups in which a member is associated. Adding or changing a member’s security association is easily customizable.

Managing Security Groups

Security groups can be added, updated, or deleted.

To manage security groups:

  1. Open Identity
  2. Select the “Security” tab
  3. Click “Security Groups” in the dropdown menu (this will display a form with existing groups)
  4. To add a group, click the “Create Security Group”
  5. To edit permissions of a group, click on the left-most icon

To manage group members:

  1. Open Identity
  2. Select the “Security” tab
  3. Click “Security Groups” in the dropdown menu
  4. Click the Member icon
  5. Drag desired members from the “Unassigned Members” column to the “Assigned Members” column or vice versa to remove members

Setting Default Security Groups

To reduce the time needed for adding new members, identify a set of default security groups. This provides a baseline set of security groups for new members without needing to manually assign each person. The setting is available when adding a new security group if you check the box at the bottom of the Security Group window that reads “Assign to New Users by Default”.

Performing a Security Audit

The security audit capability provides the ability to see group membership across all members and groups.

To perform a security audit:

  1. Open Identity
  2. Select the “Security” tab
  3. Click “Security Group Audit” in the dropdown menu

As all tables in PlaidCloud are exportable in CSV format, the group member associations are reviewable outside of PlaidCloud for either historical purposes or just some fun off-line viewing.

To export from the “Security Group Audit” form:

  1. Open Identity
  2. Select the “Security” Tab
  3. Click “Security Group Audit” in the dropdown menu
  4. Click the small icon to the far right of “Username” in the table
  5. Click “Export CSV” or “Export XLSX” depending on your preference

Viewing Available Permission Settings

Each application being used in the workspace has specific available permissions. The security group permissions are based on these application permissions.

The complete list of available permissions for each application is viewable from the Security Bin.

To access the Security Bin:

  1. Open Identity
  2. Select the “Security” tab
  3. Click “Security Bins” in the dropdown menu

To view the detailed security settings for each application, select the tags icon on the far left.

This list of available security settings is informational only. For details on managing permissions, refer to the Managing Security Groups section above.

2.3 - Member (User) Identity

Authentication and Role-based security

PlaidCloud makes authentication and role-based security easy to control from one centralized location: the “Identity” tab, located on the left side of the screen. Identity provides the foundation for member management, security, and different types of authentication processes.

Member management includes everything from viewing current members and adding new members to sending mass emails.

Security is a priority for PlaidCloud. The Security subset of the Identity tab allows you to perform security audits, set up security groups and default security groups for new members, and control the approved security level of each member.

Authentication is where security starts. PlaidCloud offers multiple authentication options to support most use cases:

  • Password Only
  • Two-Factor Authentication
  • Single Sign-On

2.4 - Member Management

Add, remove, suspend members from workspace

Identity provides the ability to add, remove, and/or suspend members of the workspace. Since PlaidCloud members can be members of multiple workspaces, removing a member from the workspace does not delete the member account from PlaidCloud.

New Members

Adding New Members

To add members:

  1. Open Identity
  2. Select the “Member” tab
  3. Click “All” in the dropdown menu to display members
  4. Click “Add Workspace Member”
  5. Complete all required fields on the member form
  6. Click the “Create” button

New Member Welcome Email

After adding a new member, a welcome email with sign-in credentials will be sent to their provided email address. The welcome email can be customized to provide additional information relevant to the new member’s PlaidCloud use.

To update or view the welcome email:

  1. Open Identity
  2. Select the “Member” tab
  3. Click “Email Welcome Message” from the dropdown menu
  4. Make any additions or changes desired
  5. Click the “Update” button

Viewing and Managing Member Sessions

To view the current member sessions:

  1. Open Identity
  2. Select the “Member” tab
  3. Click “Session Manager” in the dropdown menu

From this table, it’s possible to view session information (current sessions and last activity), as well as terminate sessions if desired.

To terminate a session:

  1. Highlight the member(s) you wish to logout
  2. Click the “End Selected Sessions” button in the upper left

Managing Distribution (Distro) Lists

Distribution lists, or Distros, are simply email distribution lists managed within PlaidCloud. They provide an easy way to quickly send reports, files, and other information to groups. The Distribution List feature allows lists to be managed on a workspace-by-workspace basis, eliminating the need to rely on external lists that may over- or under-cover the intended audience.

To manage lists:

  1. Open Identity
  2. Select the “Distro Lists” tab
  3. Click the “Create New Distro List” button to create a new list
  4. Complete all required fields of the Distro List form
  5. Click the “create” button

To manage workspace members for each list:

  1. Select the workspace icon (cloud) in the table
  2. To manage non-members, click on the globe icon.

2.5 - Member Authentication

Change Passwords and Authentication options

The Identity tab houses the security and authentication features that PlaidCloud focuses on in order to ensure a secure member platform. PlaidCloud offers three options for authentication types. They are:

  • Password Only
  • Two-Factor Authentication
  • Single Sign-On

The default authentication type is password only. However, two-factor authentication can also be activated. If a Single Sign-On SAML authentication provider is available, you can configure your PlaidCloud organization to use Single Sign-On.

If you choose to create a personal account, the default authentication type is password only. To change this to a two-factor authentication, reference the steps under the Two-Factor section.

Changing Passwords

For members using two-factor or password-only authentication, password changes are simple and can be performed under the “Member” menu (gravatar icon) in the upper right corner.

To change passwords:

  1. Select the icon (gravatar) in the upper right

    • The “Member” menu icon will be different for each user
  2. Click “Change Password” in the dropdown menu

  3. Enter your current password where requested

  4. Enter your new password where requested

  5. Re-enter your password (for confirmation)

  6. Click the “Update” button to save

Password Only Authentication

Password-only authentication is the simplest and least secure option, even with long cryptic passwords. This option may be ideal for those looking to maintain quick and convenient access without too much concern about security risks. Password-only authentication continues to be a common practice but we highly recommend using Two-Factor instead.

Two-Factor Authentication

Two-Factor, or multi-factor, authentication provides a substantial increase in security over password-only because it requires both something “you know” (the password) and something “you have” (the access key). In other words, the password alone will not enable access.

Passwords are susceptible to security threats because they represent a single piece of information that a malicious actor needs to gain access; two-factor provides additional security by requiring additional information to sign in. For this reason we strongly urge you to use two-factor for the safety of your account, not only on PlaidCloud, but on other websites that support it.

Enabling Two-Factor

To enable two-factor and set your authentication code preferences:

  1. Select the icon (gravatar) in the upper right
  2. Click “Manage Multi-Factor Authentication” in the dropdown menu
  3. Select your preferred type of two-factor authentication code delivery.

Types of Two-Factor Authentication

PlaidCloud has three options for receiving this additional information:

  • Via smartphone app (e.g. Google Authenticator, Authy, Okta, FreeOTP, etc…)
  • Via text message (or SMS)
  • Via a YubiKey from Yubico (http://yubico.com)

Smartphone-based Authentication

To get your code via a smartphone app, you will need to download an authenticator app, such as Google Authenticator, for your iOS or Android device. Note that there are other compatible authenticator apps that can be used, but this article assumes you’re using the Google Authenticator app.

After downloading the app, open it and follow the in-app setup instructions.

Once you have the authenticator set up:

  1. Tap the “+” button
  2. Select “Scan barcode”
  3. Open “Manage Multi-Factor Authentication” under the gravatar icon on PlaidCloud
  4. Select “Configure Authenticator” on PlaidCloud
  5. When prompted, use your phone to scan the QR code displayed on PlaidCloud
  6. After scanning the QR code, your authenticator app should display a six-digit authentication code which changes every 30 seconds
  7. Enter this code into the text box at the bottom of the PlaidCloud “Configure SmartPhone Authentication” screen which should still be pulled up from the previous steps
  8. Select “Verify.”
  9. If the code is valid, Two-Factor will be enabled for your account and you will be shown a list of backup codes.
  10. Once enabled, you can select “Manage Multi-Factor Authentication” again to view your backup codes or to disable two-factor.

SMS-based Authentication

To use SMS-based Authentication:

  1. Open “Manage Multi-Factor Authentication” under the gravatar icon on PlaidCloud
  2. Select “Configure SMS” on PlaidCloud
  3. Enter your mobile phone number and carrier
  4. Click “Submit”
  5. You will then be sent a text message containing an authentication code
  6. Enter this code in the window that appears in PlaidCloud
  7. If the code is valid, two-factor will be enabled for your account and a new code will be sent via SMS for you to enter whenever you log in
  8. Once enabled, you can select “Manage Multi-Factor Authentication” again to update your contact information or to disable two-factor.

YubiKey Authentication

If using YubiKeys – hardware authentication devices manufactured by Yubico – members can register up to five YubiKeys for their account. There is a managed pool of PlaidCloud YubiKeys that can be administered by the person responsible for your workspace access security, or members can choose to use any standard YubiKey.

To enable YubiKey authentication, you must first register at least one YubiKey.

To register a YubiKey:

  1. Select the icon (gravatar) in the upper right
  2. Click “Change Registered YubiKeys” in the dropdown menu
  3. Place the cursor in an open spot on the “My Registered YubiKeys” form
  4. Insert the YubiKey into your computer
  5. Press the YubiKey one-time password (OTP) button
  6. When the OTP is filled in, click the “Update” button in the form to save

After you register at least one YubiKey you can configure it to your account.

To configure a YubiKey:

  1. Select the gravatar icon
  2. Click “Manage Multi-Factor Authentication”
  3. Select “Configure YubiKey”
  4. Enter one of your YubiKey OTPs in the provided form.

If the OTP is valid, two-factor will be enabled for your account and you will need to enter a YubiKey OTP each time you log in.

PlaidCloud YubiKey Pool

The Managed YubiKey Pool provides an easy way to manage two-factor authentication for members of the workspace. The managed keys are branded with the PlaidCloud logo and can be shipped directly to members or in bulk to an administrator.

The managed pool provides advantages over individual Yubikeys in the following ways:

  • Lost keys are easily replaced without the member needing to store recovery codes
  • Assignment of keys is point and click. Members don’t have to register the key.
  • View YubiKey assignments and revoke keys with a point and click interface
  • Order and ship new keys directly to members
  • Managed YubiKeys are fully compatible with other services that accept YubiKey OTPs
  • YubiKeys can be reassigned to other members without compromising security as member turnover occurs

To order new keys:

  1. Open Identity
  2. Select the “Security” tab
  3. Click “PlaidCloud Security Keys” in the dropdown menu
  4. Click the “Order More Keys” button in the form

If managed keys were ordered, they will appear in the managed keys table.

From the key assignment form, keys can be assigned, marked as unassigned, or marked as lost. In addition, each key can have a memo attached for keeping track of notes related to issuance of the key. To do this simply click the edit icon and make the desired adjustments.

Managed keys are a one-time cost. There are no additional on-going charges for their use. Managed Yubikeys are $30 each plus shipping.

What Recovery Codes Do

For security reasons, PlaidCloud Support cannot immediately restore access to accounts with two-factor authentication enabled if you lose your phone or YubiKey. Recovery codes allow you to access your account even with a lost phone or YubiKey and then reconfigure two-factor from there.

After successfully setting up your two-factor authentication, you’ll be provided with a set of randomly generated recovery codes that you should view and save. We strongly recommend saving your recovery codes immediately. However, these codes can be downloaded at any point after enabling two-factor authentication. For more information, see Downloading your two-factor authentication recovery codes.

Lost YubiKey

You can provide an SMS number as part of your profile. If you lose access to both your registered set of YubiKeys and your recovery codes, a backup SMS number can get you back into your account.

If the member is using a managed pool key and loses it, the workspace pool administrator can mark the key as lost and issue a new one. This reduces the risk of being locked out of an account or having to retain recovery codes.

To mark a key as lost:

  1. Open Identity
  2. Select “Security”
  3. Click “PlaidCloud Security Keys”
  4. Click the edit icon
  5. Select “Lost” under the Key Usage Information section
  6. Click “Update”

This will mark the key as lost and allow you to issue a new one.

Single Sign-On

Single Sign-On requires an external service to perform the actual authentication process, and PlaidCloud simply receives a positive or negative response. Use of Single Sign-On can reduce the administrative requirements for managing passwords across multiple applications and ensure good member management practices when employees leave or access restrictions are applied.

Single Sign-On is the easiest option for members to use. It is as secure as the authentication process the external party uses. Single Sign-On helps ensure passwords are up-to-date and synchronized with other services the member interacts with.

While Single Sign-On does require a more extensive authentication process behind the scenes, and usually requires technical coordination with IT and/or network security, it can be used by anyone, although it is typically used by larger companies and academic institutions.

For more information on setting up and managing Single Sign-On see the Organization and Workspace management area.

2.6 - Advanced Operations

Administrator access, single sign on (SSO), and member expiration periods.

2.6.1 - Manage Organization Administrators

Add, remove, and update members responsible for managing an organization

Organizations in PlaidCloud provide a top level area to control options such as single sign-on and member access capabilities. Organizations each contain at least one workspace, which allows workspaces to serve as the main level of tenant separation within PlaidCloud. A workspace helps to align teams with specific areas of interest and isolate access as appropriate. PlaidCloud allows Organizations to have an unlimited number of workspaces.

Managing Organization Administrators

Each Organization in PlaidCloud can assign multiple administrators. Administrators have special privileges to control the Organization. They can do things such as manage billing, update access management, and perform workspace management. To manage administrators:

  1. Select the “Organization Settings” menu from the top right of screen
  2. Click “Administrators”

This will display the table of current administrators. After the table opens, you may add new administrators, delete existing administrators, or alter administrative privileges.

Adding an Administrator

To add an administrator:

  1. Select the “Organization Settings” menu from the top right of screen
  2. Click “Administrators”
  3. Click the “Add Organization Administrator” button
  4. Complete the required fields
  5. Click “Add as Administrator”

Deleting an Administrator

To delete an administrator:

  1. Select the “Organization Settings” menu from the top right of screen
  2. Click “Administrators”
  3. Click the delete icon of the desired administrator
  4. Confirm and click “Delete as Administrator”

2.6.2 - Managing Single Sign-On for Organization

Set up SAML 2.0 authorization along with attribute passing

Each Organization can have a custom url (https://plaidcloud.com/sso/<custom_name_here>) for members to access the single sign-on page you specified in the configuration.

To create a custom URL:

  1. Select the “Organization Settings” menu from the top right of screen
  2. Click “Single Sign-On Security Credentials”
  3. Adjust the Single Sign-On URL as desired
  4. Click “Update Organization SSO Settings”

Allow Creation of Users Automatically

If Single Sign-On is enabled, you can choose to automatically create members based on successful Single Sign-On authentication. New members will receive the default workspace and security roles specified in the Organization settings. To automatically create members:

  1. Select the “Organization Settings” menu from the top right of screen
  2. Click “Organization and User Settings”
  3. Check the “Create Users Automatically from Single Sign-On” box
  4. Choose the desired default workspace

Use of this feature greatly simplifies member management because new members will automatically have access without any additional setup in PlaidCloud. Similarly, if members are removed from the Single Sign-On facility, they will no longer have access to PlaidCloud.

Allow Security Group Assignments from Single Sign-On

If Single Sign-On is enabled, you can choose to pass a group association list along with the positive authentication message. The list’s items will be used to assign a member to the specified groups and remove them from any not specified. This is an effective way to manage security group assignments by using a central user management service such as Active Directory or other LDAP service.

If this option is enabled, security roles will be assigned using the supplied list the next time a member signs in. If the option is disabled, existing members will retain their current security roles until manually updated within PlaidCloud.

2.6.3 - Setting Member Expiration Period

Set member logins to expire after a specified period and remove from organization

If retaining inactive members within PlaidCloud is not desired, members can be set for automatic removal from the Organization after a specified period of inactivity using the expiration capabilities PlaidCloud offers. This automated removal of dormant members can be set as short as one day, if desired.

To set expiration of members:

  1. Select the “Organization Settings” menu from the top right of screen
  2. Click “Organization and User Settings”
  3. Set the desired number of days until expiration
  4. Click Update

3 - Jupyter and Command Line Interfaces

Allow access to PlaidCloud directly via Jupyter Notebooks, command line interfaces, and API access through OAuth Tokens.

3.1 - Jupyter Notebooks

Interact with PlaidCloud directly from Jupyter Notebooks

Jupyter Notebooks and Jupyter Lab provide exceptional interactive capabilities to analyze, explore, explain, and report data. PlaidCloud enables use of information directly in notebooks.

Install Jupyter Notebook

This assumes you have a working Jupyter Notebook installation.

Installing a Stand-Alone Jupyter Notebook

For more information on installing a Jupyter Notebook locally you can reference Jupyter’s installation documentation.

Add to VS Code

VS Code also provides an extension that allows you to run notebooks directly in VS Code. Install the extension from the Visual Studio Marketplace

Install PlaidCloud Utilities

While PlaidCloud can be accessed using standard OAuth and JSON-RPC requests, it is recommended that you use our pre-built libraries for simplified access. In addition, the PlaidCloud utilities library includes handy data helpers for use with Pandas dataframes.

To install the PlaidCloud Utilities perform the following pip installs:

pip install plaidcloud-rpc@git+https://github.com/PlaidCloud/plaid-rpc.git@v1.1.4#egg=plaidcloud-rpc
pip install plaidcloud-utilities@git+https://github.com/PlaidCloud/plaid-utilities.git@v1.1.9#egg=plaidcloud-utilities

Obtaining an OAuth Token

See OAuth Tokens for more information on obtaining an OAuth token and how to configure the system for automated auth.

Open Jupyter Notebook User Interface

Launch your notebook server to get started.

Once you are signed into your Jupyter notebook server, create a new notebook from the UI.

This will open a blank notebook.

Create a connection to communicate with PlaidCloud through the API endpoints

from plaidcloud.utilities.connect import PlaidConnection

conn = PlaidConnection()

Establish a local table object and then query it with the results automatically placed in a Pandas dataframe.

tbl_sf_cust_master = conn.get_table('Salesforce_Customer_Master') # This gets a table object
df_sf_cust_master = conn.get_data(tbl_sf_cust_master) # This retrieves all the data into a dataframe

With that same table object you can also write more advanced queries using standard SQLAlchemy syntax.

df_sf_cust_master_w_sales = conn.get_data(
    tbl_sf_cust_master.select().with_only_columns(
        [tbl_sf_cust_master.c.Id, tbl_sf_cust_master.c.CurrencyIsoCode, tbl_sf_cust_master.c.SyDSalesRegion]
    ).where(
        tbl_sf_cust_master.c.TotalSalesPast3Years > 0
    )
)
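
Once the results are in a dataframe, standard Pandas operations apply. For example, a minimal sketch counting customers by sales region using the columns from the query above:

sales_by_region = df_sf_cust_master_w_sales.groupby('SyDSalesRegion')['Id'].count()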

3.2 - Command Line

Interact with PlaidCloud directly from the command line

PlaidCloud uses standard JSON-RPC requests and can be used with any application that can perform those requests.

To make things easier, a Python package is available to simplify the connection and API running process.

Required Installation

From a terminal run the following command:

pip install plaidcloud-rpc

Using the SimpleRPC Object to Make a Request

To make a request using the plaidcloud-rpc package use the SimpleRPC object.

from plaidcloud.rpc.connection.jsonrpc import SimpleRPC  
  
auth_token = "Your PlaidCloud Auth Token" # See Obtaining Token below  
endpoint_uri = "plaidcloud.com" # or plaidcloud.net  
rpc = SimpleRPC(auth_token, endpoint_uri)

Once you have the SimpleRPC object instantiated you can then issue RPC requests to PlaidCloud. This example requests the metadata for a table.

table = rpc.analyze.table.table(  
            project_id=project_id,  
            table_id=table_id  
        )

What APIs are Available?

There are many APIs available for use that control nearly every aspect of PlaidCloud. All of the APIs, the inputs, and expected outputs are documented in the APIs documentation.

Obtaining an OAuth Token

See OAuth Tokens for more information on obtaining an OAuth token and how to configure the system for automated auth.

3.3 - OAuth Tokens

Obtaining OAuth tokens to interact with PlaidCloud APIs

PlaidCloud uses standard JSON-RPC requests and can be used with any application that can perform those requests. Requests are secured using OAuth tokens.

Obtaining an OAuth Token

OAuth tokens are generated from the PlaidCloud app. To view the list of current OAuth tokens assigned to you and generate new ones, navigate to Analyze > Tools > Registered Systems.

Once there you can view any existing tokens or choose to create a new one.

Download OAuth PlaidCloud Config File

Select “Register a New System”.

Fill out the form and note the name you entered so you can find it in the list.

Once created, open the registered system record by clicking on the gear icon. This will display the configuration file text.

NOTE: Be sure to select the project you want to use this connection for from the drop down at the top. It will add the Project Unique Identifier to the configuration.

Copy this text into a plaid.conf file located on your system. Place this in the .plaid directory.

Create a Config File Locally

Create a directory one level up from your notebook directory or from where you plan to use command line interaction. Name the directory .plaid.

Inside the .plaid directory, create a file called plaid.conf and paste the contents you copied above into the file. Save the file; this will now allow you to connect using the PlaidCloud utilities and RPC methods.
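
For example, the resulting layout might look like this (the my_project and notebooks names are illustrative):

my_project/
├── .plaid/
│   └── plaid.conf
└── notebooks/
    └── analysis.ipynb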

Advanced Uses

While it is convenient to locate the .plaid folder near its usage point, it can actually be placed anywhere in the upstream directory tree. The initialization process will traverse up the directory tree until it finds the .plaid directory.

Locating the .plaid directory higher up may be useful if you have multiple operations that need access but cannot coexist in the same lower level directory structures.
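
As an illustration, the lookup behaves roughly like the following sketch; the actual initialization code in the PlaidCloud packages may differ.

import os

def find_plaid_dir(start='.'):
    """Walk up from `start` until a .plaid directory is found."""
    path = os.path.abspath(start)
    while True:
        candidate = os.path.join(path, '.plaid')
        if os.path.isdir(candidate):
            return candidate
        parent = os.path.dirname(path)
        if parent == path:  # reached the filesystem root without finding it
            return None
        path = parent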

Optional Paths Specification

If you are using a local Jupyter Notebook installation or operating from the command line, the helper tools can export data, Excel files, and other outputs, as well as read local data into dataframes. To do this, a paths.yaml file is necessary.

In addition to the plaid.conf file, create a paths.yaml file. The paths.yaml should be a sibling to the plaid.conf file inside the .plaid directory. It should contain the following path information:

paths:
  PROJECT_ROOT: '{WORKING_USER}/Documents'
  LOCAL_STORAGE: '{PROJECT_ROOT}/local_storage'
  DEBUG: '{PROJECT_ROOT}/local_storage'
  REPORTS: '{PROJECT_ROOT}/reports'

  create: []
  local: {}
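
The {WORKING_USER} and {PROJECT_ROOT} placeholders are substituted when the paths are resolved. A rough sketch of that substitution follows (assuming PyYAML is installed and that WORKING_USER resolves to the user's home directory; the helper tools may implement this differently):

import os
import yaml

with open('.plaid/paths.yaml') as f:
    config = yaml.safe_load(f)

# Assumed substitution: WORKING_USER resolves to the user's home directory.
resolved = {'WORKING_USER': os.path.expanduser('~')}
for name, template in config['paths'].items():
    if isinstance(template, str):  # skip non-path entries like create/local
        resolved[name] = template.format(**resolved)

print(resolved['REPORTS'])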

4 - PlaidLink

PlaidLink provides indirect access to client systems and processes that are protected by firewalls or behind other restrictions that make direct connections from within PlaidCloud difficult. By using a PlaidLink Agent installed within the isolated area, PlaidCloud can request the agent perform actions like running queries, downloading or uploading files, checking sensor conditions, interacting with SAP, and much more.

Since the agent initiates contact with PlaidCloud and communicates over standard HTTPS network protocols, it can normally operate with minimal setup. In addition, the agent can run as an unprivileged user to control access rights within a restricted environment.

4.1 - PlaidLink Agents

Create and manage remote access using lightweight agents

Description

Sometimes it’s necessary and desirable to access data or run processes from a remote system that does not allow external access. This is common in enterprise environments behind firewalls. PlaidCloud provides this capability through PlaidLink, which enables access to remote systems behind a firewall or where direct access from PlaidCloud is not desired.

PlaidLink uses an agent-based system. This means that an agent, acting as the remote user, is installed on a system inside the firewall or other restricted area. The agent connects to PlaidCloud by initiating an outbound connection over a secure HTTPS websocket. It is as secure as any other encrypted web connection and usually does not require you to open non-standard ports. Before gaining access, the agent must identify itself by sending its agent identifier. If authentication succeeds, the agent is granted access to the approved operations.

PlaidLink can be installed on Windows, Unix, and Linux systems and can run under low-privilege users. On Windows systems, PlaidLink can operate as a Windows Service with full control from the Services panel. On Linux or Unix systems, it can run as a daemon process.

PlaidLink can also run as a stand-alone Docker container or as a Kubernetes pod.

Managing Agents

To manage agents:

  1. Open Analyze
  2. Select “Tools”
  3. Click “PlaidLink Agents”

This brings you to the PlaidLink Agents Table where you can view, modify, and obtain credentials for the list of available agents.

Creating an Agent

To create an agent:

  1. Open Analyze
  2. Select “Tools”
  3. Click “PlaidLink Agents”
  4. Click “Add PlaidLink Agent”
  5. Complete the required fields
  6. Click “Create”
  7. Assign the agent to the necessary security groups to access resources needed to perform its job
  8. Assign the agent to the necessary Document accounts to access documents needed to perform its job

Obtaining Agent Credentials

To configure PlaidLink agents on the remote system, you must first obtain the agent’s identifying information, which secures the connection. This information includes both a public and a private key.

To obtain these keys:

  1. Open Analyze
  2. Select “Tools”
  3. Click “PlaidLink Agents”
  4. Click the edit icon

This will open a form where you can view the public and private key values.

Regenerating Agent Credentials

It is a good idea to periodically regenerate the public and private keys and update the configuration of remote systems in order to maintain security.

To regenerate the credentials:

  1. Open Analyze
  2. Select “Tools”
  3. Click “PlaidLink Agents”
  4. Click the regenerate icon

Once the credentials have been regenerated, they can be obtained in the same way a new agent’s credentials are obtained (described above).

Enabling and Disabling an Agent

To disable an agent:

  1. Open Analyze
  2. Select “Tools”
  3. Click “PlaidLink Agents”
  4. Uncheck the “Active” checkbox

Running Multiple Agents

PlaidLink is designed to allow operation of multiple agents using a single service installation. A single install can handle agents from multiple workspaces and/or agents with different levels of permissions for task execution.

To enable multiple agents, you simply add the agent credentials to the PlaidLink configuration file.
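
For example, a multi-agent configuration might look like the following sketch. The field names here are hypothetical, for illustration only; use the config-dist.yaml shipped with the agent as the authoritative reference for the actual schema.

# Hypothetical structure for illustration only; see config-dist.yaml
# for the actual schema.
agents:
  - public_key: "FIRST_AGENT_PUBLIC_KEY"
    private_key: "FIRST_AGENT_PRIVATE_KEY"
  - public_key: "SECOND_AGENT_PUBLIC_KEY"
    private_key: "SECOND_AGENT_PRIVATE_KEY"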

Similar to running multiple agents within one PlaidLink service, it is also possible to run multiple PlaidLink services.

This is sometimes necessary depending on use of system based security or network access restrictions that prevent communication across network boundaries.

Compute, Memory, and Disk Requirements

The PlaidLink service is extremely lightweight and only needs minimal compute and memory to operate. When processing significant data volumes, it may be necessary to increase compute resources and especially memory.

Normally, the agent will happily run with 5% of CPU and 200MB of memory. For intense data operations, it is recommended to allocate an entire CPU and at least 4GB of RAM. For dynamic resource allocation systems like Kubernetes, it is fine if the agent has access to burstable resources rather than reserved resources.

Disk space requirements for the agent are also minimal. Agent operations use disk space as a data buffer when transferring large amounts of data. Typically, 8GB of space is fine for normal operations. For intense data operations, scale disk up according to the expected data volumes. There is no set amount because it depends on several factors including CPU speed, network speed, and the amount of data. However, a good place to start is 20GB and adjust from there.

Networking Requirements

The PlaidLink Agent is designed to operate with minimal configuration required. It does not require any special VPN or network configuration other than allowing standard HTTPS network traffic. Agents communicate over the same protocol as normal web browser based traffic.

The agent service always initiates communication with PlaidCloud so there is no need to configure ingress access in firewalls.

4.2 - Installation

Create a configuration file, then install and run the PlaidLink Agent

Download the agent

Check the releases on PlaidCloud.com for PlaidLink

Extract the agent

Extract the downloaded zip file to an install location of your choice. Generally, this location will be:

C:\Users\<Username here>\src\plaidlink

Create a configuration file

Copy the config-dist.yaml file in the agent's directory to %ProgramData%\plaidcloud\, and rename this copy to config.yaml

Edit this configuration with the values retrieved from PlaidCloud.

Install the agent's service

Run the install_windows_service.bat file in the agent's install directory OR

From an administrator command prompt, navigate to the agent's install directory and run:

.\PlaidLink.exe install

Running the agent

Type Services into Windows' search bar and open the service manager. In the list of services, find PlaidCloud Agent.

Right-click the service and select "Start" to start the agent.

Freezing updates

If at any point you want to disable the agent's auto-update feature, open the agent's YAML configuration file, add a line at the root level of the file that reads freeze_updates: true, and restart the agent's service.
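
The resulting root-level entry in config.yaml looks like this:

freeze_updates: true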

4.3 - Configure

Grant the PlaidLink Agent access to Document accounts and data connections

The PlaidLink Agent works in conjunction with the PlaidCloud service. The PlaidLink Agent provides the connection necessary to operate with systems not accessible directly such as databases and file systems. The agent performs a number of essential actions including:

  • Reading and writing to databases
  • Reading and writing files to network drives and servers
  • Checking for sensor conditions
  • Interacting with SAP ECC and SAP S/4HANA through Remote Function Calls (RFCs)
  • Interacting with SAP Profitability and Cost Management (PCM)
  • Sending messages and notifications to remote systems

Create an Agent on PlaidCloud

PlaidLink Agent management takes place within the Analyze tab of PlaidCloud. The first step is to create a new PlaidLink Agent instance on PlaidCloud.

  1. Select the Analyze tab
  2. Select the tools menu from the top
  3. Click PlaidLink Agents
  4. Create a new Agent with an appropriate name for the environment or server that it will be installed on for remote operations

To view the Agent public and private keys

  1. Click on the edit icon to view the form
  2. At the bottom of the form you will find the public and private keys that were randomly generated during the Agent creation process

To randomly generate new keys

  1. Click on the Regenerate icon for the Agent record
  2. Once the keys are regenerated, don’t forget to update the agent configuration file with the new keys on the remote server.

Document Account Access

If the agent will need to have access to a Document account for uploading or downloading files, it must be granted permission to access the Document account.

To grant account access

  1. In the Document tab select Manage Accounts
  2. Once the table of accounts appears, click on the agent icon for the account to which the new Agent should have upload/download rights
  3. Drag the new agent into the Assigned Agents column
  4. Save the access control form.

Data Connection Access

If the agent will need to have access to a data connection such as a database, it must be granted permission to access the external data connection information.

To grant connection access

  1. In the Analyze tab select the Tools menu
  2. Click External Data Connections
  3. Once the table of data connections appears, click on the agent icon for the connection to which the new Agent should have usage rights
  4. Drag the new agent into the Assigned Agents column and save the access control form.

Follow these Installation Instructions to install PlaidLink on the remote system.

4.4 - Upgrade

Perform a manual upgrade of the PlaidLink Agent installation

A manual upgrade of PlaidLink may be necessary if the agent does not have sufficient privileges to update itself when new versions are released or a manual upgrade process is desired.

Download the agent

Check the releases on PlaidCloud.com for PlaidLink

Stop the Current Agent

Type Services into Windows' search bar and open the service manager. In the list of services, find PlaidCloud Agent.

Right-click the PlaidCloud Agent service and select Stop. Once the service successfully stops, continue on.

Extract the agent

Navigate to the current location of the installed agent.

C:\Users\<Username here>\src\

Rename the current installation folder so that it will no longer be referenced. For example, Plaidlink_Old_12122022.

Extract the downloaded zip file to this location. Generally, this location will be:

C:\Users\<Username here>\src\plaidlink

Start the agent

Return to the Services window (type Services into Windows' search bar if it is closed) and find PlaidCloud Agent in the list of services. Right-click the service and select Start. Once the agent shows in the Running state, it is operational again on the new version.

5 - PlaidXL

The PlaidCloud Office Add-in (PlaidXL) allows for focused analysis interactions directly in Microsoft Excel. PlaidXL enables direct interaction with Workspaces, Projects, Workflows, Tables, Views, and Variables in PlaidCloud. Its ability to provide a set of PlaidCloud functions aids in managing data analysis processes directly from Microsoft Excel.

5.1 - Installation

Install the PlaidCloud Excel Add-in

For Windows

  1. From the Insert > Add-ins menu in Microsoft Excel, type PlaidCloud in the add-in search box
  2. Select the PlaidCloud Office Add-in and install it

For Mac

  1. From the Insert > Store menu in Microsoft Excel for Mac, type PlaidCloud in the add-in search box
  2. Select the PlaidCloud Office Add-in and install it

5.2 - Connecting

Connect Excel with the PlaidCloud Excel Add-in to your PlaidCloud project

For PlaidCloud Logins

Connecting through PlaidXL is much like logging into PlaidCloud directly. You will be asked for your email, password, and any multi-factor authentication code you have enabled. Fill this out as normal, and begin using PlaidXL!

For Single Sign-on Logins

If you normally use single sign-on to access PlaidCloud, the login process will be transparent for you as long as you are currently logged into your organization. If you are not logged in, you will be prompted to sign in.

5.3 - Working with Data

Retrieve and save data using the PlaidCloud Excel Add-in after connecting to a PlaidCloud project

Retrieve Data

To retrieve data from PlaidCloud, select your desired project from the dropdown menu. Once a project is selected, a list of tables in that project will appear. Click on a table to select it, and click the Retrieve Table button to import the selected table into Excel. The table will be placed in a new worksheet, named after the table. For your convenience, the following will also happen when a table is retrieved:

  • Column headers will be frozen

  • Auto-filters will be enabled

  • An offset-based named range will be generated to encompass the data

    • This range’s name will be the same as the table’s name, prefixed with an underscore and with all spaces replaced by underscores
    • For example, the range for a table named “Sample data” would be “_Sample_data”

Save Data

If you make changes to data in the spreadsheet and want to push these changes to the PlaidCloud table, simply press the Save Table (OVERWRITE!) button.

Since you can open multiple PlaidCloud tables in PlaidXL, bulk operations are in place for your convenience. The pull/push all active tables buttons will retrieve the latest versions of all tables active in Excel, or upload all active tables back to PlaidCloud, respectively.

In addition, pulling all tables will also refresh any pivot tables that use data from a refreshed table.

6 - Partner Information

Documentation and policies for PlaidCloud partners

Please contact us at info@plaidcloud.com for a list of implementation partners.

7 - Markdown Example Page

PlaidCloud Documentation (this page) uses markdown in its construction. For those who would like to contribute to the development of PlaidCloud Documentation, please contact your representative or send an email to info@plaidcloud.com

This page serves two purposes:

  • Demonstrate how the PlaidCloud documentation uses Markdown
  • Provide a "smoke test" document we can use to test HTML, CSS, and template changes that affect the overall documentation.

Heading levels

The above heading is an H2. The page title renders as an H1. The following sections show H3-H6.

H3

This is in an H3 section.

H4

This is in an H4 section.

H5

This is in an H5 section.

H6

This is in an H6 section.

Inline elements

Inline elements show up within the text of a paragraph, list item, admonition, or other block-level element.

Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.

Inline text styles

  • bold
  • italic
  • bold italic
  • strikethrough
  • underline
  • underline italic
  • underline bold
  • underline bold italic
  • monospace text
  • monospace bold

Lists

Markdown doesn't have strict rules about how to process lists. When we moved from Jekyll to Hugo, we broke some lists. To fix them, keep the following in mind:

  • Make sure you indent sub-list items 2 spaces.

  • To end a list and start another, you need an HTML comment block on a new line between the lists, flush with the left-hand border. The first list won't end otherwise, no matter how many blank lines you put between it and the second.

Bullet lists

  • This is a list item
  • This is another list item in the same list
  • You can mix - and *
    • To make a sub-item, indent two spaces.
      • This is a sub-sub-item. Indent two more spaces.
    • Another sub-item.
  • This is a new list. With Hugo, you need to use an HTML comment to separate two consecutive lists. The HTML comment needs to be at the left margin.

  • Bullet lists can have paragraphs or block elements within them.

    Indent the content to be the same as the first line of the bullet point. This paragraph and the code block line up with the first B in Bullet above.

    ls -l
    
    • And a sub-list after some block-level content
  • A bullet list item can contain a numbered list.

    1. Numbered sub-list item 1
    2. Numbered sub-list item 2
    3. Numbered sub-list item 3
    4. Numbered sub-list item 4
    5. Numbered sub-list item 5

Numbered lists

  1. This is a list item
  2. This is another list item in the same list. The number you use in Markdown does not necessarily correlate to the number in the final output. By convention, we keep them in sync.
  1. This is a new list. With Hugo, you need to use an HTML comment to separate two consecutive lists. The HTML comment needs to be at the left margin.

  2. Numbered lists can have paragraphs or block elements within them.

    Indent the content to be the same as the first line of the bullet point. This paragraph and the code block line up with the N in Numbered above.

    ls -l
    
    • And a sub-list after some block-level content. This is at the same "level" as the paragraph and code block above, despite being indented more.

Tab lists

Tab lists can be used to conditionally display content, e.g., when multiple options must be documented that require distinct instructions or context.

Please select an option.

Tabs may also nest formatting styles.

  1. Ordered
  2. (Or unordered)
  3. Lists
echo 'Tab lists may contain code blocks!'

Header within a tab list

Nested header tags may also be included.

Checklists

Checklists are technically bullet lists, but the bullets are suppressed by CSS.

  • This is a checklist item
  • This is a selected checklist item

Code blocks

You can create code blocks by surrounding the code block with three back-tick characters (code fences) on lines before and after the code block. Only use back-ticks for code blocks. This allows you to specify the language of the enclosed code, which enables syntax highlighting. It is also more predictable than using indentation.

this is a code block created by back-ticks

The back-tick method has some advantages.

  • It works nearly every time
  • It is more compact when viewing the source code.
  • It allows you to specify what language the code block is in, for syntax highlighting.
  • It has a definite ending. Sometimes, the indentation method breaks with languages where spacing is significant, like Python or YAML.

To specify the language for the code block, put it directly after the first grouping of back-ticks:

ls -l

Common languages used in PlaidCloud documentation code blocks include:

  • bash / shell (both work the same)
  • go
  • json
  • yaml
  • xml
  • none (disables syntax highlighting for the block)

Code blocks containing Hugo shortcodes

To show raw Hugo shortcodes as in the above example and prevent Hugo from interpreting them, use C-style comments directly after the < and before the > characters. The following example illustrates this (view the Markdown source for this page).

{{< codenew file="pods/storage/gce-volume.yaml" >}}

Links

To format a link, put the link text inside square brackets, followed by the link target in parentheses. Link to PlaidCloud.com or Relative link to docs.plaidCloud.com
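
In Markdown source, those links look like this (the target URLs are illustrative):

[Link to PlaidCloud.com](https://plaidcloud.com)
[Relative link to docs.plaidcloud.com](/docs/)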

You can also use HTML, but it is not preferred. Link to PlaidCloud.com

Images

To format an image, use similar syntax to links, but add a leading ! character. The square brackets contain the image's alt text. Try to always use alt text so that people using screen readers can get some benefit from the image.

pencil icon
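
The Markdown source for the image above looks like this (the image path is hypothetical):

![pencil icon](/images/pencil.png)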

To specify extended attributes, such as width, title, caption, etc., use the figure shortcode, which is preferred to using an HTML <img> tag. Also, if you need the image to also be a hyperlink, use the link attribute, rather than wrapping the whole figure in Markdown link syntax as shown below.

Image used to illustrate the figure shortcode

Pencil icon

Image used to illustrate the figure shortcode

Even if you choose not to use the figure shortcode, an image can also be a link. This time the pencil icon links to the PlaidCloud website. Outer square brackets enclose the entire image tag, and the link target is in the parentheses at the end.

pencil icon

You can also use HTML for images, but it is not preferred.

pencil icon

Tables

Simple tables have one row per line, and columns are separated by | characters. The header is separated from the body by cells containing nothing but at least three - characters. For ease of maintenance, try to keep all the cell separators even, even if you need to use extra space.

| Heading cell 1 | Heading cell 2 |
| -------------- | -------------- |
| Body cell 1    | Body cell 2    |

The header is optional. Any text separated by | will render as a table.

Markdown tables have a hard time with block-level elements within cells, such as list items, code blocks, or multiple paragraphs. For complex or very wide tables, use HTML instead.

| Heading cell 1 | Heading cell 2 |
| -------------- | -------------- |
| Body cell 1    | Body cell 2    |

Visualizations with Mermaid

You can use Mermaid JS visualizations. The Mermaid JS version is specified in /layouts/partials/head.html

{{< mermaid >}}
graph TD;
  A-->B;
  A-->C;
  B-->D;
  C-->D;
{{</ mermaid >}}

Produces:

(Rendered Mermaid flowchart: A → B, A → C, B → D, C → D)
{{< mermaid >}}
sequenceDiagram
    Alice ->> Bob: Hello Bob, how are you?
    Bob-->>John: How about you John?
    Bob--x Alice: I am good thanks!
    Bob-x John: I am good thanks!
    Note right of John: Bob thinks a long<br/>long time, so long<br/>that the text does<br/>not fit on a row.

    Bob-->Alice: Checking with John...
    Alice->John: Yes... John, how are you?
{{</ mermaid >}}

Produces:

(Rendered Mermaid sequence diagram of the conversation above)


More examples from the official docs.

Sidebars and Admonitions

Sidebars and admonitions provide ways to add visual importance to text. Use them sparingly.

A sidebar offsets text visually, but without the visual prominence of admonitions.

This is a sidebar.

You can have paragraphs and block-level elements within a sidebar.

You can even have code blocks.

sudo dmesg

Admonitions

Admonitions (notes, warnings, etc) use Hugo shortcodes.

Includes

To add shortcodes to includes.

Katacoda Embedded Live Environment


Embed a custom markdown shortcode section

I'm Only A Test

Don't adjust your TV, this is only a test.