Infoworks now automates onboarding of data directly to Snowflake and supports data transformation in Snowflake (SQL pushdown). Infoworks can orchestrate data management across hybrid environments including Snowflake and data lakes.
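The SQL pushdown capability mentioned above means a pipeline's transformation logic is expressed as SQL and executed inside Snowflake rather than in an external engine, so the data never leaves the warehouse. As a rough conceptual illustration only (not Infoworks' implementation), the sketch below uses the Snowflake Python connector with placeholder credentials and table names to run a transformation as a CREATE TABLE AS SELECT statement:

```python
# Minimal sketch of the SQL pushdown idea: the transformation runs as SQL
# inside Snowflake. Connection parameters and table names are placeholders,
# not Infoworks settings.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",        # placeholder account identifier
    user="my_user",
    password="my_password",
    warehouse="my_warehouse",
    database="ANALYTICS",
    schema="PUBLIC",
)

# The transformation is pushed down as a CREATE TABLE AS SELECT in Snowflake.
conn.cursor().execute(
    """
    CREATE OR REPLACE TABLE daily_revenue AS
    SELECT order_date, SUM(amount) AS revenue
    FROM raw_orders
    GROUP BY order_date
    """
)
conn.close()
```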
| JIRA ID | Issue |
|---|---|
| IPD-17862 | Sync to target on a Databricks interactive cluster may fail if both the ingestion job and the pipeline run on the same interactive cluster. To resolve the issue, refer to this document. |
| IPD-17596 | An error sometimes appears when the Preview Data tab of a Snowflake pipeline is switched to a different environment (such as Dataproc), or vice versa. To resolve this: a. Close the Preview Data tab and open it again. b. If that does not work, restart the DT services. |
| IPD-17907 | Pipeline build notifications are not working for pipelines built on the Snowflake environment. |
| IPD-17906 | Reference pipeline table state is not updated to "Crawling" during the first pipeline build. |
| JIRA ID | Issue | Severity |
|---|---|---|
| IPD-16684 | Ingestion job is not merging "pending cdc" tables when there are no incremental records for the current run. | Highest |
| IPD-16918 | Sync to target to Postgres is failing for tables in incremental mode. | Highest |
| IPD-16922 | Ingestion jobs from SQL Server are failing while converting date and/or time from a character string. | Highest |
| IPD-16958 | The Infoworks test connection job is failing with an enum constant error on the Google Ads Connector. | Highest |
| IPD-17318 | The ingestion service in the Prod and Dev environments is crashing frequently. | Highest |
| IPD-17370 | The fetch metadata API defaults the header rows count to 1 even when 0 is passed in the request body. | Highest |
| IPD-16703 | Pipeline export configuration API is not setting/inserting key is_existing_dataset into metadata. | High |
| IPD-16650 | Custom audit columns are not getting added to pipelines created through the SQL-Import API. | High |
| IPD-16759 | Pipeline with BigQuery target fails when the decimal datatype scale is greater than 9. | High |
| IPD-16800 | Hive Arraystring column is not getting ingested when exporting a table from a Hive metadata sync source to BigQuery. | High |
| IPD-16917 | Primary Key and Indexes are missing after export from Infoworks 5.0 to Postgres. | High |
| IPD-16924 | In the 5.0 sync to Postgres target, enclosing the table name and column names in quotes while executing DDL causes them to become case-sensitive in the Postgres DB. | High |
| IPD-16947 | Metadata crawl on a Hive metadata sync source is not working through the API. | High |
| IPD-16984 | A change in export configurations is overwriting the existing target table. | High |
| IPD-16971 | In TPT-based Teradata ingestion, millisecond precision is not getting stored in the mongo key last_ingested_cdc_value. | High |
| IPD-16972 | BigQuery limitation on source URIs where only 10K part files are allowed per load job (see the batching sketch after this table). | High |
| IPD-16975 | The error message "Cannot read property 'toHexString' of null" appears when trying to import a source JSON file on a new source using the config migration option. | High |
| IPD-16994 | Sync table schema page in pipeline is not responding when the number of columns for the table is above 1200. | High |
| IPD-17190 | The error message "table id doesn't exist in the source" appears while adding the table to a table group. | High |
| IPD-17077 | The refresh token for a user created through the API flow is not working. | High |
| IPD-17418 | TPT ingestion jobs for Teradata VIEWs are randomly failing with an error indicating the length of a received record is greater than the defined length in the TPT script. | High |
| IPD-16900 | Data Transformation interactive jobs are failing with a connection refused error. | Medium |
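IPD-16972 above relates to BigQuery's cap of 10,000 source URIs per load job. The sketch below is not the Infoworks fix; it only illustrates how a loader can batch a longer URI list under that limit using the google-cloud-bigquery client (the helper name and variables are hypothetical):

```python
# Hypothetical illustration of batching around BigQuery's 10,000-source-URI
# limit per load job (IPD-16972); not Infoworks' internal implementation.
from google.cloud import bigquery

MAX_URIS_PER_LOAD = 10_000  # BigQuery's documented per-load-job limit


def load_in_batches(client: bigquery.Client, uris: list[str], table_id: str,
                    job_config: bigquery.LoadJobConfig) -> None:
    """Split a long list of GCS part-file URIs into compliant load jobs."""
    for start in range(0, len(uris), MAX_URIS_PER_LOAD):
        batch = uris[start:start + MAX_URIS_PER_LOAD]
        job = client.load_table_from_uri(batch, table_id, job_config=job_config)
        job.result()  # wait for this batch before submitting the next
```

Submitting the batches sequentially keeps each job within the quota; they could also run in parallel if load ordering does not matter.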
For the Installation procedure, see Infoworks Installation on Azure, Infoworks Installation on AWS, and Infoworks Installation on GCP.
For more information, contact support@infoworks.io.
For upgrading from lower versions to 5.2.0, see Upgrading to 5.2.0.
For the PAM, see Product Availability Matrix.