Infoworks 5.4.5
Onboard Data

Onboarding Data from Snowflake Source

Overview

Infoworks supports ingesting data from a Snowflake database in a scalable and parallelized way.

Creating a Snowflake Source

For onboarding data from a Snowflake source, see Onboarding an RDBMS Source. Ensure that the Source Type selected is Snowflake.

NOTE Selecting a Snowflake data environment for onboarding data from a Snowflake source is not supported.

Snowflake Configurations

Fetch Data Using: The mechanism through which Infoworks fetches data from the database. Currently, Infoworks supports fetching data using Spark.

Connection URL: The URL of the Snowflake account through which Infoworks connects to the database.

Account Name: Name of the Snowflake account.

Authentication Type: The type of authentication. This field is mandatory. In the dropdown, select Default or OAuth.

Username: The Snowflake username required to connect to Snowflake. This field appears only when the Authentication Type is Default.

Authentication Type for Password: Select the authentication type from the dropdown: Infoworks Managed or External Secret Store.

If you select Infoworks Managed, provide the password in Authentication Password for Password.

If you select External Secret Store, select the Secret that contains the password.

OAuth Service: Infoworks supports the OAuth service provided by Snowflake, and Azure AD as an external authorization provider. Select the required OAuth service: Snowflake or Azure AD.

Warehouse: Name of the Snowflake warehouse. This field is mandatory.

Data Environment Name: A user-defined name for the environment being configured. The environment defines where and how your data is stored and accessed, so provide a meaningful name that helps identify it.

Description: A user-defined description for the environment being configured.

Client ID: In Azure AD, this is the ID of the application registered as the Snowflake client. In Snowflake, this is the public identifier of the security integration created in Snowflake. Provide the application ID as applicable.

Authentication Type for Client Secret: In Azure AD, this is the confidential secret of the application registered as the Snowflake client. In Snowflake, this is the confidential secret used to connect to the Snowflake account. Provide the secret corresponding to the Client ID above.

If you select Infoworks Managed from the Authentication Type for Client Secret dropdown, provide the secret in Authentication Password for Client Secret.

If you select External Secret Store, select the secret from the Secret for Client Secret dropdown.

User Name: The Snowflake service account user. Provide the account user details.

Scope: A scope limits the permitted actions to a particular set of resources as part of a role assignment. Provide the scope as defined in Azure AD. This field appears only when the OAuth Service selected is Azure AD.

Token End Point URL: The endpoint invoked to get an access token from Azure AD. Provide the endpoint URL.

Authentication Type for Refresh Token: If offline access is included in the authorization URL, Snowflake presents the user with the option to consent to it. In this context, offline access refers to allowing the client to refresh access tokens when the user is not present. With user consent, the authorization server returns a refresh token in addition to an access token when redeeming the authorization code. Provide a refresh token.

If you select Infoworks Managed, provide the token in Authentication Password for Refresh Token.

If you select External Secret Store, select the secret from the Secret for Refresh Token dropdown.

Additional Parameters: Click the Add button to provide parameters as key-value pairs. Provide any additional parameters required to connect to Snowflake. This section is optional.

Session Parameters: Click the Add button to provide parameters as key-value pairs. Provide any session parameters required to connect to Snowflake. This section is optional.
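Since data is fetched using Spark, the connection fields above correspond roughly to the options of the open-source Spark Snowflake connector. The sketch below illustrates that mapping only; Infoworks manages the actual connection internally, and all values shown are placeholders, not real credentials or Infoworks configuration keys.

```python
# Illustrative mapping of the connection fields onto Spark Snowflake
# connector options (net.snowflake.spark.snowflake). Placeholder values only.
sf_options = {
    "sfURL": "<account_name>.snowflakecomputing.com",  # Connection URL
    "sfUser": "<username>",                            # Username
    "sfPassword": "<password>",                        # per Authentication Type
    "sfWarehouse": "<warehouse>",                      # Warehouse
    "sfDatabase": "<database>",
    "sfSchema": "<schema>",
}

# With a live SparkSession, a read would look like:
# df = (spark.read.format("net.snowflake.spark.snowflake")
#       .options(**sf_options)
#       .option("dbtable", "<table>")
#       .load())
print(sorted(sf_options))
```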

Once the settings are saved, you can test the connection.
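For the OAuth fields, the Refresh Token and Token End Point URL follow the standard OAuth 2.0 refresh-token grant: the client posts the refresh token, client ID, and client secret to the token endpoint to obtain a new access token. The sketch below builds such a request body under that assumption; the endpoint and all credential values are hypothetical placeholders.

```python
from urllib.parse import urlencode

# Hypothetical values; substitute the Client ID, Client Secret, Scope, and
# Token End Point URL configured for your Azure AD application.
token_endpoint = "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token"
payload = {
    "grant_type": "refresh_token",   # standard OAuth 2.0 grant for offline access
    "client_id": "<client-id>",
    "client_secret": "<client-secret>",
    "refresh_token": "<refresh-token>",
    "scope": "<scope-as-defined-in-azure-ad>",
}

# The body an HTTP client would POST to the token endpoint
# (Content-Type: application/x-www-form-urlencoded).
body = urlencode(payload)
print(body)
```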

Configuring a Snowflake Table

NOTE Columns with array and struct datatypes are not supported.

With the source metadata in the catalog, you can now configure the table for CDC and incremental synchronization.

Step 1: Click the Configuration link for the desired table.

Step 2: Provide the ingestion configuration details.

Query: The custom query based on which the table has been created.

NOTE This field is visible only if the table is ingested using Add Query as Table.

Ingest Type: The type of synchronization for the table. The options include full refresh and incremental.

Natural Keys: The combination of keys that uniquely identifies a row. This field is mandatory for incremental ingestion tables. It helps in identifying and merging incremental data with the data already existing on the target.

NOTE At least one of the columns in the natural key must have a non-null value for the Infoworks merge to work.

Incremental Mode: The option to indicate whether the incremental data must be appended or merged to the base table. This field is displayed only for incremental ingestion. The options include append and merge.

Incremental Fetch Mechanism: The fetch mechanism options include Archive Log and Watermark Column. This field is available only for Oracle log-based ingestion.

Watermark Column: Select one or more watermark columns to identify the incremental records. The selected watermark columns must be of the same datatype.

Enable Watermark Offset: For Timestamp and Date watermark columns, this option enables an additional offset (decrement) to the starting point for ingested data. Records created or modified within the offset time period are included in the next incremental ingestion job.

NOTE A Timestamp watermark column has three offset options: Days, Hours, and Minutes; a Date watermark column has only the Days option. In both cases, the offset is decremented from the starting point.

Ingest subset of data: The option to configure filter conditions to ingest a subset of data. This option is available for all RDBMS and Generic JDBC sources. For more details, see Filter Query for RDBMS Sources.

Target Configuration

Configure the following fields:

Target Table Name: The name of the target table.

Storage Format: The format in which the tables must be stored. The options include Read Optimized (Delta), Read Optimized (Parquet), Read Optimized (ORC), and Write Optimized (Avro).

Partition Column: The column used to partition the data on the target. Selecting the Create Derived Column option allows you to derive a column and then use it as the partition column. This option is enabled only if the partition column datatype is date or timestamp.

Provide the Derived Column Function and Derived Column Name. Data will be partitioned based on this derived column.
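A derived partition column typically maps each date or timestamp value to a coarser bucket (for example, a year or year-month string) so that the target is partitioned by that bucket rather than by individual values. The sketch below illustrates the idea; the function names are hypothetical, not Infoworks' actual list of Derived Column Functions.

```python
from datetime import date

def derive_partition(value: date, fn: str = "month") -> str:
    """Sketch of a derived-column function applied to a date partition column.
    Function names here are illustrative placeholders."""
    if fn == "year":
        return f"{value.year:04d}"
    if fn == "month":
        return f"{value.year:04d}-{value.month:02d}"
    if fn == "day":
        return value.isoformat()
    raise ValueError(f"unknown derived-column function: {fn}")

print(derive_partition(date(2023, 5, 17)))          # 2023-05
print(derive_partition(date(2023, 5, 17), "year"))  # 2023
```

Rows sharing the same derived value land in the same target partition, which keeps partition counts manageable for high-cardinality date columns.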

Advanced Configurations

Follow these steps to set advanced configurations for a table:

Step 1: Click the Data Catalog menu and click Ingest for the required source.

NOTE For an already ingested table, click View Source, click the Tables tab, click Configure for the required table and click the Advanced Configuration tab.

Step 2: Click the Configure Tables tab, click the Advanced Configuration tab and click Add Configuration.

Step 3: Enter key, value, and description. You can also select the configuration from the list displayed.

Sync Data to Target

Using this option, you can configure the Target connections and sync data as described in the section Synchronizing Data to External Target.

The following are the steps to sync data to target.

Step 1: From the Data Sources menu, select one of the sources and click the View Source/Ingest button.

Step 2: Select the source table to be synchronized to Target.

Step 3: Click the Sync Data to Target button.

Step 4: Enter the mandatory fields as listed in the table below:

Job Name: The name of the ingestion job.

Max Parallel Tables: The maximum number of tables that can be crawled at a given instance.

Compute Cluster: The template based on which the cluster spins up for each table. The compute clusters created by the admin and accessible to the user are listed in the dropdown.

Overwrite Worker Count: The option to override the maximum and minimum number of worker node values configured in the compute template.

Number of Worker Nodes: The number of worker nodes that will spin up in the cluster.

Save as a Table Group: The option to save the list of tables as a table group.

Click Onboarding an RDBMS Source to navigate back to complete the onboarding process.
