Infoworks Release Notes

v6.1.3

Date of Release: May 2025

This section describes the new features and enhancements introduced in this release.

  • Custom Tags and Extensions Management - Starting with version 6.1.3, Operations Analyst (OA) users can manage custom tags and extensions, a capability that was previously restricted to administrators.
  • Saved Filter Presets - Introduced functionality allowing users to create and manage multiple saved filter presets at the individual user level within the Ops dashboard. This enhancement enables streamlined access to frequently used filter configurations, eliminating the need for repeated manual setup. Users can manage their saved filter presets through the "My Profile" settings section.
  • Case-Sensitive Target Tables - From version 6.1.3, support for case-sensitive column names has been introduced for external/export pipeline targets, allowing column case to be retained. See the Feature Summary section below.
  • Snowflake Authentication Support - Added support for OAuth and key-pair authentication for Snowflake sources, sync to target Snowflake, and pipeline export to Snowflake.
  • Azure File Share - Added support to load data files from Azure File Share for file sources (CSV, Fixed Width, JSON, Mainframe).
  • Email Summary Notifications (Beta Feature) - Introduced email summary notifications for workflow runs, with options to attach failed task logs and a summary. These can be enabled via checkboxes in the 'Send Notification' task properties.
  • Override Parameters - Pipeline version parameters and workflow parameters can now be overridden at build time using a custom run option.
  • Workflow Monitoring Page - Introduced average run time and duration metrics on the workflow monitoring page, along with task and job logs for each applicable workflow task. Additionally, introduced color-coded indicators on the timeline chart to facilitate easier debugging.
  • Audit Trail Enhancement - Audit trail now captures created, updated, and deleted field records, including before-and-after values for updates.
  • Key-Pair Authentication - Users can now authenticate with Snowflake using key-pair authentication as an alternative to OAuth or username-password credentials.
  • Query Parameters Support - Query parameters enable the use of custom variables in Query-As-A-Table, which can be injected during ingestion. Their values can be overridden at both table group and workflow levels.
  • Source Catalog Filtering (Beta Feature) - Introduced a source catalog field to limit assets in Hive and Delta Metasync sources within Unity environments.
  • Pre-Execution Hook Support - Added support for pre-execution hooks in advanced compute configuration to run custom classes (e.g., for encryption) before ingestion and transformation jobs.
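The key-pair authentication entries above require an RSA key pair registered with the Snowflake user. As a minimal sketch, the pair can be generated with OpenSSL following Snowflake's documented steps; the file names, user name, and key value below are placeholders, and the exact Infoworks configuration fields are not described here:

```shell
# Generate an unencrypted PKCS#8 private key (Snowflake also supports encrypted keys)
openssl genrsa 2048 | openssl pkcs8 -topk8 -inform PEM -out rsa_key.p8 -nocrypt

# Derive the matching public key
openssl rsa -in rsa_key.p8 -pubout -out rsa_key.pub

# Register the public key with the Snowflake user (run in Snowflake, not the shell;
# paste the key body from rsa_key.pub without the BEGIN/END lines):
#   ALTER USER my_user SET RSA_PUBLIC_KEY='MIIBIjANBgkq...';
```

The private key file (rsa_key.p8) is what the Snowflake connection configuration consumes; the public key lives only on the Snowflake user.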

Feature Summary: Case-Sensitive Column Names in External Targets

What has changed: In this release, enhancements have been made to support case-sensitive column names for external targets, specifically targeting use cases like CosmosDB where column casing is preserved and relevant. As part of this change, explicit case conversions in the common code were removed, and case-handling logic was refactored to accommodate both regression scenarios and native target-specific behaviour in target writers.

Why this was needed: Up until version 6.1.2, the platform did not support case-sensitive column names for targets, due to a default behaviour of normalizing column names to lowercase. This posed a limitation for scenarios where case sensitivity in column names is critical.

What customers should be aware of during upgrade: While this enhancement maintains backward compatibility, it required significant refactoring of the underlying codebase. As part of stabilization, the team invested additional cycles to validate behaviour across a wide range of scenarios and to account for edge cases. However, given the broad impact of this change, and to mitigate any indirect effects, customers are advised to take extra precautions during this upgrade, such as:

  • Backup metadata: It's recommended to take a backup of existing metadata before upgrading, as a best practice.
  • Review downstream dependencies: If any downstream scripts, integrations, or tools rely on specific column name casing, review and validate them post-upgrade to ensure consistency and avoid unintended behaviour.
  • Test in a lower environment: Perform use-case testing with data validation in a lower/dev environment before deploying 6.1.3 in production.
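When reviewing downstream dependencies, a quick programmatic check can surface columns whose casing drifted after the upgrade. The following is a minimal sketch; the column lists are illustrative, and in practice the actual names would be read from the target's information schema:

```python
# Expected column casing (e.g. from the source schema) vs. what the target reports.
expected = ["CustomerId", "orderDate", "TotalAmount"]
actual = ["customerid", "orderDate", "TotalAmount"]  # illustrative target columns

# Index actual columns by lowercase name so we can match case-insensitively,
# then flag any pair whose exact casing differs (casing drift).
actual_by_lower = {c.lower(): c for c in actual}
drift = [
    (col, actual_by_lower[col.lower()])
    for col in expected
    if col.lower() in actual_by_lower and actual_by_lower[col.lower()] != col
]
print(drift)  # -> [('CustomerId', 'customerid')]
```

Any non-empty result indicates columns that downstream scripts or integrations relying on exact casing should be checked against.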

Resolved Issues

This section lists the issues resolved in this release:

JIRA ID     Issue
IPD-27514   Ingestion jobs failing with unrecognized field "run_duration"
IPD-27580   Issue with Snowflake OAuth
IPD-27616   Make Unity cluster libraries in the allow list configurable
IPD-27613   Databricks compute update does not update permissions; restart API restarts compute multiple times
IPD-27089   Pipelines list returns an empty result for a domain
IPD-27673   Unable to add Snowflake profile and warehouse for pipeline groups
IPD-26338   Issue with SQL pipelines when '--' is used in the query
IPD-27707   Step ID is not updated in the job_drivers document while updating the job driver heartbeat
IPD-27294   Transformation nodes sometimes disappear when dragged onto the canvas
IPD-27494   Ephemeral cluster jobs on tables and segments do not show Stages or Cluster ID in the UI
IPD-27518   Additional BashNode configuration values
IPD-27758   API error: Request body must have 'storage_id'
IPD-27884   Patch API to update the max_modified_timestamp key not working
IPD-27889   Some sources not visible under Favourites after being starred
IPD-27122   CSV ingestion jobs failing with an SFTP source
IPD-27948   Users are able to access data environments of all other projects/domains
IPD-28104   Workflow import failing when migrating with config-migration on the workflow settings page

Known Issues

  • Incorrect preview data is observed when the filter condition is modified via the pipeline version parameter.
  • In the onboarding flow, the target table name accepts duplicates when the case does not match.
  • Ingestion fails in the Unity Catalog environment when the target catalog name contains a '-' character.
  • Able to view information_schema and its tables for a metadata sync source in a Unity Catalog environment.
  • Ingestion jobs fail for JSON tables when any column is excluded.
  • Creation of a new generic JDBC extension fails when specifying the folder location.
  • In preview data, SQL queries are run by the previously selected profile.
  • Parallel segmented load ingestion fails when history is enabled.
  • The target table in a SQL pipeline created through SQL import is unavailable as a reference table.
  • Insert Overwrite mode is not supported for spark native targets in transformation pipelines.
  • The SCD2 merge audit columns are not being updated correctly for referenced tables in Snowflake and Datalake environments.
  • In CosmosDB targets, the 'Unique Key' field does not work as expected.
  • Oracle ingestion jobs fail due to Data Quality Validation issues on Snowflake with an Azure 11.3 cluster.
  • Segment load jobs are reporting incorrect row counts during ingestion in the Snowflake environment.
  • Workflow parameters are incorrectly appearing as the Pipeline Parameters in the Pipeline Node, despite no parameters being defined.
  • Pipeline builds with a Snowflake target fail after upgrade if case sensitivity was enabled on the lower version (setting dt_case_sensitive_schema_alignment=TRUE) and there is a mismatch in the natural key column case.
  • For CDC jobs, ingestion fails for tables created using Query-As-A-Table with incremental mode insert overwrite when column names contain spaces.
  • For CDC jobs, ingestion fails for tables created using Query-As-A-Table with incremental mode insert overwrite when a derived split-by is configured.
  • Preview Data in custom target node in a pipeline is not supported.
  • Streaming jobs that have been stopped may still show a running state for the cluster job. Users can verify that the job has actually stopped by confirming that the number of batches run for that job does not increase after stopping it; more details here.
  • Micro batches processing stop for streaming ingestion if the source stops streaming the data for Databricks runtime version 14.3.
  • Pipeline builds fail when 'read from merged/deduplicated table' is selected.
  • Pipeline node preview data requests time out for the initial few tries.

Limitations

  • Discrepancy when editing the 'Authentication Type' configuration for a persistent cluster: The user is unable to change the authentication type for an existing cluster. Note: The cluster creator cannot be changed once created. Updating authentication details with different user credentials will not affect the creator. To change the creator, the cluster must be deleted from the Databricks console and recreated from Infoworks.
  • CDC SCD2 pipeline builds fail intermittently in a 6.1.0 Unity environment.
  • When a table is ingested using Databricks 14.3, pipeline preview data requests time out with both 11.3 and 14.3.
  • Streaming is not supported on a shared cluster.
  • If the target node properties (e.g., target table name or target table base path) are changed after a successful pipeline build, DT will not treat the modified table as a new table. Note: If a user needs to update the target node properties, they must delete the existing target node and configure a new one in the pipeline editor.
  • For TPT jobs running on a shared cluster, it is the user's responsibility to install TPT; otherwise the job will not work, due to a limitation from Databricks.
  • In a non-Unity Catalog environment, execution type Databricks SQL is only supported in DBFS storage.
  • Target tables used in SQL pipelines without a create table query will not be available in the data models.
  • Spark execution type does not support SQL pipelines.
  • Jobs on the Databricks Unity environment fail with "Error code: FILE_NOT_FOUND_FAILURE." Refer here.
  • In pipelines using Snowflake Metadata Sync sources, case-sensitive column names are not supported.

For Kubernetes-based installations, refer to the Kubernetes installation documentation.

For more information, contact support@infoworks.io.

Upgrade

For upgrading Azure Kubernetes, refer to Upgrading Infoworks from 6.1.2.x to 6.1.3 for Azure Kubernetes.

PAM

Please refer to the Product Availability Matrix (PAM).