Date of Release: March 2023
This section describes the new features and enhancements introduced in this release.
This section lists the issues resolved in this release:
| JIRA ID | Issue |
|---|---|
| IPD-21003 | During the pipeline build, Infoworks is unable to read the timestamp in the BigQuery output. |
| IPD-20921 | Backlog jobs get stuck in the pending state and do not progress, resulting in other jobs also getting blocked. |
| IPD-21022 | User is unable to remove audit columns from Infoworks pipeline target in Snowflake CDW environment. |
| IPD-20934 | Infoworks is using information_schema.tables instead of information_schema.schemata for the schema check. |
| IPD-20933 | When user configures incremental ingestion, the job fails with error records. |
| IPD-21010 | The Append and Merge sync modes in pipeline builds for the Snowflake CDW environments are running the create database if not exists command. |
| IPD-21060 | User is unable to create/edit a source for the Operations Analyst, Database Admin, and Data Modeller roles via API. |
| IPD-20864 | The storage client secrets are visible in plain text in Infoworks log for ephemeral clusters. |
| IPD-21039 | The fixed-width sources expect newly added columns to be of the Source type instead of the Target type. |
| IPD-21107 | The Delete mode in the pipeline does not have the reference table option. |
| IPD-21011 | The append mode fails when the column order is different from that of the target table. |
| IPD-21196 | Infoworks validation expects the pipeline to have a filter node/join node when it is in delete mode. |
| IPD-19400 | A large number of jobs in BLOCKED state would prevent execution of new jobs. This would lead to the new jobs being in PENDING state for a long time. |
| IPD-19339 | Despite cluster creation getting completed, the Creating Cluster timer duration keeps increasing. |
| IPD-19545 | The list of data connections and GET data connections APIs are accessible only to Admin users. |
| IPD-19720 | User is unable to read Snowflake warehouse name in config-migration APIs. |
| IPD-19773 | APIs are failing since the private_key_file_details field is stored in JSON format instead of array format. |
| IPD-19751 | Infoworks does not disable query caching while fetching schema from BigQuery. |
| IPD-19801 | The job summary is not provided as a part of the Job Status API response. |
| IPD-19766 | For BigQuery export files, Sync to Target is failing when the table schema contains array type. |
| IPD-19810 | User is able to set multiple default computes using API calls. |
| IPD-19821 | When the service credential used in the BigQuery target is different from the service credentials used to create the environment, Sync to Target fails with an "Invalid JWT Signature" error. |
| IPD-19854 | For the Streaming sources, the Table Configuration Translator shows an error while saving advanced configuration. |
| IPD-19853 | If the number of characters in the table name exceeds 27, export to Teradata fails with a "table_name_temp already exist" error. |
| IPD-19815 | There are incorrect log messages in Sync to Target for the Teradata job in 5.3. |
| IPD-19900 | The zip file downloaded from Application Logs does not contain cluster logs. |
| IPD-19945 | Infoworks does not fetch the correct datatypes for the CDATA sources. |
| IPD-19929 | In a few scenarios, upgrading from 5.3.0 to 5.3.0.5 crashes the Ingestion service. |
| IPD-19943 | There is no provision to configure the disk space for the Dataproc clusters. |
| IPD-20022 | Despite disabling the dataset creation in the pipeline configuration, the pipeline still creates the schema. |
| IPD-20087 | The Clustering columns are missing in the Target BigQuery table. |
| IPD-20082 | The API does not populate ClusterID for the Ephemeral cluster. |
| IPD-20397 | Pipeline build fails when the source table column has trailing "%". |
| IPD-20371 | While configuring the BigQuery target, the columns are getting ordered alphabetically irrespective of the order the user chooses. |
| IPD-20570 | The "Add tables to crawl" API is not working for the BigQuery Sync source. |
| IPD-20455 | The dt advanced configuration dt_batch_spark_coalesce_partitions, used to merge partitions, does not take effect in the pipeline job. |
| IPD-20670 | Google has changed the return message for exception handling of autoscaling policies resulting in job failure. |
| IPD-20752 | The iw_environment_cluster_policy configuration does not take effect for ephemeral clusters. |
| IPD-20931 | The Save and Save & Add Another buttons are not working in the aggregate node. |
| IPD-20936 | The Preview Data tab on the Infoworks pipeline sometimes fails to load data. |
| IPD-20949 | Upgrade from 5.3.1 to 5.3.1.5 fails due to an invalid image reference caused by a change in the image format in the templates, resulting in pod failure. |
The following section lists known issues that Infoworks is aware of and is working to fix in an upcoming release:
| JIRA ID | Issue |
|---|---|
| IPD-21264 | REST API does not support config migration for pipeline when custom audit columns are added in the advanced configuration section. |
| IPD-21161 | The ingestion job fails for a MongoDB table when there is a timestamp field in nested columns. |
| IPD-20820 | The Auth API is not working as expected if restricted_visibility_mode flag or user role is changed. It starts working again either after default cache expiry time of 15 minutes or the user-configured expiry time. |
For Kubernetes-based installation, refer to Infoworks Installation on Azure Kubernetes Service (AKS).
For more information, contact support@infoworks.io.
For upgrading to 5.4 Kubernetes, refer to Upgrade to 5.4.0 Kubernetes.