Date of Release: April 2023
JIRA ID | Issue |
---|---|
IPD-20373 | Infoworks provides REST APIs to create and delete file mappings for Mainframe sources. The sample curl request can be found here; a hedged sketch of the calls also follows this table. |
IPD-20276 | Infoworks now supports ingestion from Variable Block (VB) Mainframe files. To add a variable block mainframe file mapping, select Variable Block from the Record Type dropdown. |
IPD-20175 | Infoworks can now ingest Copybook files that have incorrect indentation. You can change the values of comment_upto_char (default value: 6) and comment_after_char (default value: 6). |
IPD-20173, IPD-20097, and IPD-20088 | Infoworks supports ingesting all types of mainframe files with filters. |
IPD-20099 | Infoworks supports ingesting Copybook files that have FILLER as a column name. |
IPD-20098 | Infoworks supports flattening of complex datatypes. |
IPD-20685 | Infoworks supports registering and de-registering Hive Metastore UDFs. The following key-value pair is used to register/de-register: hive_udfs_to_register= |
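The file-mapping REST APIs mentioned in IPD-20373 are invoked like other Infoworks REST APIs. The sketch below is illustrative only: the endpoint path, identifiers, authentication header, and payload field are assumptions rather than the documented contract, so refer to the sample curl request linked above for the exact call.

```
# Hypothetical sketch only: <host>, <source_id>, <file_mapping_id>, the endpoint path,
# and the payload are placeholders/assumptions, not the documented API.
curl -X POST "https://<host>/v3/sources/<source_id>/file-mappings" \
  -H "Authorization: Bearer <access_token>" \
  -H "Content-Type: application/json" \
  -d '{"name": "sample_vb_mapping"}'

curl -X DELETE "https://<host>/v3/sources/<source_id>/file-mappings/<file_mapping_id>" \
  -H "Authorization: Bearer <access_token>"
```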
JIRA ID | Issue |
---|---|
IPD-21571 | Backticks (`) are not getting preserved in pipelines while importing the SQL query. |
IPD-21574 | The API to create Query as a Table is not working in the BigQuery environment. |
IPD-21423 | Pipeline build fails with "Unable to find table metadata for node" error message. |
IPD-20206 | If the user tries to stop the Orchestrator using the stop.sh script, it stops all the components except the Orchestrator Engine Worker. |
IPD-20921 | The backlog jobs get stuck in the pending state and do not progress, resulting in other jobs also getting blocked. |
IPD-21003 | During the pipeline build, Infoworks is unable to read the timestamp in the BigQuery output. |
IPD-21093 | For a fixed length file, the first column gets moved to the last during ingestion. |
IPD-20889 | In a TPT-based Teradata table, there is a discrepancy between the actual job run time and the job duration shown on the Job Metrics page. |
IPD-20979 | The BigQuery pushdown attempts to read and write to the specified parent project instead of the configured project in the environment. |
IPD-20087 | The Clustering columns are missing in the Target BigQuery table. |
IPD-20536 | Data Analyst and Data Modeler have permissions to preview the data in a pipeline, view sample data, and generate sample data when configuring a table. |
IPD-20022 | Despite disabling the dataset creation in the pipeline configuration, the pipeline still creates the schema. |
IPD-19943 | There is no provision to configure the disk space for the Dataproc clusters. |
IPD-19929 | In a few scenarios, upgrading from 5.3.0 to 5.3.0.5 crashes the Ingestion service. |
IPD-19945 | Infoworks does not fetch the correct datatypes for the CData sources. |
IPD-19821 | When the service credential used in the BigQuery target is different from the service credential used to create the environment, Sync to Target fails with an "Invalid JWT Signature" error. |
IPD-19766 | For BigQuery export files, Sync to Target fails when the table schema contains an array type. |
IPD-19853 | If the table name exceeds 27 characters, export to Teradata fails with a "table_name_temp already exist" error. |
IPD-19751 | Infoworks does not disable query caching while fetching schema from BigQuery. |
IPD-19815 | There are incorrect log messages in Sync to Target for Teradata jobs in 5.3. |
IPD-19474 | The API POST call to Pipeline Config-Migration fails with a generic error. |
IPD-19545 | The list of data connections and GET data connections APIs are accessible only to Admin users. |
IPD-19542 | When running ingestion on a BigQuery environment, the error table is not created in the BigQuery dataset if the source has only one error record. |
IPD-19339 | Despite cluster creation getting completed, the Creating Cluster timer duration keeps increasing. |
IPD-19663 | Workflows fail due to request timeouts and go directly to the failed state without executing any of the tasks. |
IPD-19701 | The API call to trigger Sync to Target for the table group configured with target data connection fails. |
IPD-19753 | For JSON and streaming sources, if the CDC data has a column with empty values, it is marked as an error record. |
IPD-20202 | If you manually change the datatype for a column after the metacrawl, Mainframe ingestion fails. |
IPD-20174 and IPD-20172 | If a COBOL layout file does not have a header field, Infoworks is unable to crawl/ingest the EBCDIC file. |
IPD-20684 | Google has changed the return message for exception handling of autoscaling policies resulting in job failure. |
IPD-20570 | The "Add tables to crawl" API is not working for BigQuery Sync source. |
IPD-20455 | The DT advanced configuration dt_batch_spark_coalesce_partitions to merge partitions is not taking effect in the pipeline job. |
IPD-20432 | The workloads were running in the Compute project rather than the Storage project where datasets are persisted. |
IPD-20397 | Pipeline build fails when the source table column has trailing "%". |
IPD-20207 | Data Analyst and Data Modeler are unable to crawl metadata. |
IPD-20371 | While configuring the BigQuery target, the columns get ordered alphabetically irrespective of the order the user chooses. |
Step 1: Stop all running jobs.
Step 2: Change the directory to the /tmp folder.
```
cd /tmp
```
Step 3: Change the user to infoworks.
```
su infoworks
```
Step 4: To download the tar file, execute the following command.
```
wget https://iw-saas-setup.s3.us-west-2.amazonaws.com/5.3/patch/infoworks-5.3.0.13-ubuntu2004.tar.gz
```
Step 5: Extract the tar file.
```
tar -xzf infoworks-5.3.0.13-ubuntu2004.tar.gz
```
Step 6: Set the IW_HOME environment variable.
```
export IW_HOME=<IW_HOME_PATH>
```
Step 7: Create a backup directory.
```
# Created under the current directory (/tmp); it is referenced below as /tmp/backup_040423.
mkdir backup_040423
```
Step 8: Move the REST API and DT files to the backup folder.
```
# mv handles directories without a recursive flag.
mv $IW_HOME/platform/rest-api-service /tmp/backup_040423/rest-api-service.bak
mv $IW_HOME/lib/dt/jars/dt-commons.jar /tmp/backup_040423/dt-commons.jar.bak
```
Step 9: Copy the patch files to the respective original directories.
```
# Copy the patched REST API service directory from the extracted patch (/tmp/infoworks).
cp -r /tmp/infoworks/platform/rest-api-service $IW_HOME/platform/rest-api-service
# Restore the environment-specific configuration files from the backup.
cp /tmp/backup_040423/rest-api-service.bak/.env $IW_HOME/platform/rest-api-service
cp /tmp/backup_040423/rest-api-service.bak/ecosystem.config.js $IW_HOME/platform/rest-api-service
# Copy the patched DT jar.
cp /tmp/infoworks/lib/dt/jars/dt-commons.jar $IW_HOME/lib/dt/jars/dt-commons.jar
```
Step 10: Restart the REST API and DT services. Optionally, verify that the services are back up as sketched after the command.
```
$IW_HOME/bin/stop.sh restapi dt && $IW_HOME/bin/start.sh restapi dt
```
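For a quick sanity check that the REST API service came back up, a generic process lookup is enough; the process-name pattern below is an assumption based on the directory this patch touches, not a documented service name.

```
# Assumes the REST API service appears with "rest-api-service" in its command line.
pgrep -af rest-api-service
```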
To go back to the previous checkpoint version:
Step 1: Stop all running jobs.
Step 2: Change the directory to the /tmp folder.
```
cd /tmp
```
Step 3: Change the user to infoworks.
```
su infoworks
```
Step 4: Set the IW_HOME environment variable.
```
export IW_HOME=<IW_HOME_PATH>
```
Step 5: Remove the patch-specific files and folders.
```
rm -r $IW_HOME/platform/rest-api-service
rm $IW_HOME/lib/dt/jars/dt-commons.jar
```
Step 6: Move the files and folders from the backup directory back to the respective original directories.
```
mv /tmp/backup_040423/rest-api-service.bak $IW_HOME/platform/rest-api-service
mv /tmp/backup_040423/dt-commons.jar.bak $IW_HOME/lib/dt/jars/dt-commons.jar
```
Step 7: Restart the REST API and DT services.
```
$IW_HOME/bin/stop.sh restapi dt && $IW_HOME/bin/start.sh restapi dt
```