Configure Source and Target -> Select Tables -> Configure Synchronization -> Onboard Data
This section explains how to onboard data from an RDBMS source to the data lake by configuring the compute environment and base location.
Before initiating the onboarding process, ensure that an Environment is defined. For more information on defining an Environment, see the Managing Environments section.
The Infoworks onboarding process includes the following steps:
After defining the compute environment, you can set up an RDBMS data source for ingestion, which includes configuring the connection URL, user credentials, and the target location where you want to onboard the data.
To define the source and target, you must perform the following steps:
Now, you can configure the source connection properties and select the data environment where you want to onboard the data.
Configure the following fields:
Source Fields and Descriptions

Source Field | Description |
---|---|
Source Name | A name for the source that will appear in the Infoworks user interface. The source name must be unique and must not contain spaces or special characters except underscores. For example, Customer_Details. |
Fetch Data Using | The mechanism through which Infoworks fetches data from the database. For example, JDBC. |
Connection URL | The connection URL through which Infoworks connects to the database. For details on connection URL, refer to the individual RDBMS source sections. |
Username | The username for the connection to the database. |
Authentication Type for Password | Select the authentication type from the dropdown, for example, Infoworks Managed or External Secret Store. If you select Infoworks Managed, provide the password. If you select External Secret Store, select the secret that contains the password. |
Snowflake Warehouse | Snowflake warehouse name. The warehouse is pre-filled from the selected Snowflake environment; this field is editable. |
Custom Tags | This dropdown lists the tags you can choose from. Tags can be used to identify or segregate sources. |
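As an example, a JDBC connection URL for a MySQL source might look like the following sketch; the host, port, and database name are placeholders, and the exact format for your database is documented in the individual RDBMS source sections:

```
jdbc:mysql://db-host.example.com:3306/sales_db
```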
Target Fields and Descriptions

Target Field | Description |
---|---|
Data Environment | From the drop-down list, select the data environment where the data will be onboarded. |
Storage | Select from one of the storage options defined in the environment. |
Base Location | The path to the base/target directory where all the data should be stored. |
Catalog Name | The catalog name of the target. |
Staging Catalog Name | The staging catalog name for temp tables. |
Staging Schema Name | The staging schema name for temp tables. |
Schema Name | The schema name of the target. |
Snowflake Database Name | The database name of the Snowflake target. |
Staging Schema Name | The name of the schema where all the temporary tables (such as CDC and segment tables) managed by Infoworks will be created. Ensure that Infoworks has ALL permissions on this schema. This field is optional; if no staging schema name is provided, Infoworks uses the Target Schema. |
Use staging schema for error tables | Select this checkbox to create history tables in the staging schema. |
BigQuery Dataset Name | Dataset name of the BigQuery target. |
Staging Dataset Name | The name of the dataset where all the temporary tables (such as CDC and segment tables) managed by Infoworks will be created. Ensure that Infoworks has ALL permissions on this dataset. This field is optional; if no staging dataset name is provided, Infoworks uses the Target dataset. |
Use staging dataset for error tables | Select this checkbox to create history tables in the staging dataset. |
Make available in Infoworks domains | Select the relevant domain from the dropdown list to make the source available in that domain. |
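For example, a Base Location on Azure Data Lake Storage Gen2 would be an ABFS URI like the following sketch; the container name, storage account name, and directory path are placeholders:

```
abfss://datalake@mystorageaccount.dfs.core.windows.net/iw/sources/customer_details
```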
Optionally, click the Save and Test Connection button to save the settings and verify that Infoworks can connect to the source system.
Click Next to proceed to the next step.
Tables can be selected in two ways: by browsing the source and choosing tables, or by configuring a custom query.
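When configuring a custom query, you supply a SQL statement whose result set is onboarded as a table. A minimal sketch, with illustrative table and column names:

```sql
SELECT customer_id, order_date, amount
FROM sales.orders
WHERE region = 'EMEA'
```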
The tables are automatically set to Full refresh by default.
To modify the configurations and synchronize an onboarded table:
After metadata crawl is complete, you have the flexibility to add a target column to the table.
A target column is an additional column you can add to the target table when you need special columns beyond those present in the source.
You can select the datatype for the specific column.
You can select one of the following transformation modes: Simple or Advanced.
Simple Mode
In this mode, you must add the transformation function to be applied to that column. A target column with no transformation function applied will have null values in the target.
Advanced Mode
In this mode, you can provide a Spark expression for the column. For more information, refer to Adding Transform Derivation.
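For example, a Spark SQL expression that derives a full-name target column from two source columns might look like the following; the column names are illustrative:

```sql
CONCAT(first_name, ' ', last_name)
```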
The final step is to onboard the tables. You can also schedule the onboarding so that the tables are periodically synchronized.
Fields | Description |
---|---|
Table Group Name | The name of the table group that is onboarded. |
Max. Parallel Tables | The maximum number of tables that can be crawled at any given time. |
Max Connections to Source | The maximum number of source database connections allocated to this ingestion table group. |
Compute Cluster | The compute cluster that is spun up for each table. |
Overwrite Worker Count | The option to override the minimum and maximum worker values configured in the compute template. |
Number of Worker Nodes | The number of nodes that can be spun up in the cluster. |
Snowflake Warehouse | Snowflake warehouse name. For example, TEST_WH. |
Onboard tables immediately | Select this checkbox to onboard the tables immediately. |
Onboard tables on a schedule | Select this checkbox to onboard the tables on a schedule at a later point in time. |
Click Onboard Data at the bottom right of the screen to onboard the data.
On the success message pop-up, click View Data Catalog to onboard additional data, or click View Job Status to monitor the status of the onboarding job you submitted.
Refer to the sections below if you want to configure additional parameters for the source.
You can set additional connection parameters to the source as key-value pairs. These values will be used when connecting to the source database.
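For example, for an Oracle source you might add JDBC driver properties as key-value pairs such as the following; the values are illustrative, and the valid keys depend on your JDBC driver:

```
oracle.net.CONNECT_TIMEOUT = 10000
oracle.jdbc.ReadTimeout = 60000
```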
To add additional connection parameters:
For configuring an RDBMS source table, see Migrating Source Configurations.
To add a source extension to process the data before onboarding, see Adding Source Extension.
To set the Advanced Configurations at the source level, see Setting Source-Level Configurations.
You can notify subscribers about ingestion jobs at the source level. To configure the list of subscribers, see Setting Ingestion Notification Services.
Click Delete Source to delete the source configured.
To onboard more tables from the same data source, follow these steps.
Set the configuration use_default_ub_query to false at the table/source level, which uses the GREATEST() function.
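Sketched as an advanced-configuration key-value pair (the key name comes from the text above; the value shown is the setting being described):

```
use_default_ub_query = false
```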