Date of Release: February 2026
New Features and Enhancements
This section describes the new features and enhancements introduced in this release.
- Databricks 16 LTS Support: The product now supports Azure Databricks 16 LTS.
- Unity Catalog–Based Staging for Snowflake: Staging area configuration is now available when Unity Catalog support is enabled, allowing users to define catalog, schema, and volume for staging.
  - NOTE: Updating Snowflake with the Unity Staging Area will terminate all running clusters.
- File Archival and Purge for Source Connectors: File archival and purge capabilities are now supported for Structured Files, JSON, and Fixed-width Structured Files source connectors, enabling better management of ingested files.
- GPG/OpenPGP Support for CSV Ingestion: Support added for GPG/OpenPGP encryption, enabling secure ingestion of encrypted CSV files with in-memory decryption.
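The GPG/OpenPGP feature above decrypts encrypted CSV files in memory before parsing, so no plaintext is written to disk. The following sketch illustrates that decrypt-then-parse flow only; the toy XOR cipher is a hypothetical stand-in for the real GPG/OpenPGP decryption step, and the function names are illustrative, not part of the product.

```python
import csv
import io

def xor_decrypt(data: bytes, key: bytes) -> bytes:
    """Toy stand-in for GPG/OpenPGP decryption (illustration only)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def ingest_encrypted_csv(encrypted: bytes, key: bytes) -> list[dict]:
    # Decrypt entirely in memory -- no plaintext file is written to disk.
    plaintext = xor_decrypt(encrypted, key).decode("utf-8")
    # Parse the decrypted CSV straight from the in-memory buffer.
    return list(csv.DictReader(io.StringIO(plaintext)))

# Round-trip demo: "encrypt" a CSV (XOR is symmetric), then ingest it.
key = b"secret"
raw = "id,name\n1,alice\n2,bob\n".encode("utf-8")
rows = ingest_encrypted_csv(xor_decrypt(raw, key), key)
```

The key point is that both the ciphertext and the decrypted CSV exist only as in-memory buffers passed directly to the parser.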
Resolved Issues
This section lists the issues resolved in this release:
| JIRA ID | Issue |
|---|---|
| IPD-28960 | Remove SAMLTOKEN from the URL |
| IPD-28974 | Infoworks Pipeline issue giving Secret NULL value |
| IPD-28983 | Workflow runs under Admin > Manage workflows fails to load in Prod |
| IPD-28811 | User Source deletion Issue |
| IPD-28398 | Enhancement: Prioritization of workflows |
Known Issues
- When “Archive Files with Error Records” is disabled, archiving skips the entire source directory rather than only the files with error records.
- RDBMS custom target is not supported when Sync Table Schema is enabled.
- Provision throughput must be explicitly set for custom Cosmos DB and Cosmos DB (MongoDB) Sync to Target connections.
- A data mismatch was observed in pipeline builds using a Derive node with a Snowflake target. The issue occurs because the column order changes when a Derive node is added after the initial run.
  NOTE: This is due to the default behavior of the Spark–Snowflake connector (external to Infoworks), which uses column order during writes. To avoid this issue, set the following advanced configuration: key: dt_spark_snowflake_extra_options, value: column_mapping=name.
- Ingestion job fails for Vertica sources when a String column is used as the Split-By column or when the Split-By column is derived using the MOD() function.
- SQL pipeline builds on Snowflake environments that use CREATE and INSERT table statements are marked as failed after SQL query execution.
- Pipeline builds with Confluent source tables using the “Read from the merged/deduplicated table” option are not executing successfully.
- Confluent streaming ingestion jobs using the AVRO message format are not completing successfully.
- Confluent ingestion jobs using the Protobuf message format may encounter errors during execution.
- Audit updates for sensitive entities such as S3 access/secret keys and WASB account keys are not captured.
- Incorrect preview data is observed when the filter condition is modified via the pipeline version parameter.
- Sync to Target and pipeline jobs may fail with SQL Server targets when the table name contains a reserved keyword.
- Sync to Target and pipeline build jobs with Snowflake targets may fail when Enable Schema Sync is enabled if columns are added or deleted after the first job run.
- In the onboarding flow, the target table name accepts duplicates when the case does not match.
- Ingestion fails in the Unity Catalog environment when the target catalog name contains a '-' character.
- In preview data, SQL queries are run using the previously selected profile.
- The target table in a SQL pipeline created through SQL import is unavailable as a reference table.
- Insert Overwrite mode is not supported for Spark native targets in transformation pipelines.
- The SCD2 merge audit columns are not being updated correctly for referenced tables in Snowflake and Datalake environments.
- Segment load jobs are reporting incorrect row counts during ingestion in the Snowflake environment.
- Preview Data in custom target node in a pipeline is not supported.
- Streaming jobs that have been stopped may still show a running state for the cluster job. Users can verify that a job has actually stopped by confirming that the number of batches run for that job does not increase after it is stopped; more details here.
- Micro-batch processing stops for streaming ingestion if the source stops streaming data on Databricks runtime version 14.3.
- Pipeline builds fail when the "Read from merged/deduplicated table" option is selected.
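The Spark–Snowflake connector note above recommends column_mapping=name because the connector's default order-based write misaligns values once a Derive node changes the DataFrame's column order. The pure-Python sketch below illustrates the difference between the two mapping strategies; it is an illustration of the concept, not the connector's actual implementation.

```python
def write_by_order(target_cols, row_cols, row):
    """Order-based mapping (connector default): values land by position."""
    return dict(zip(target_cols, [row[c] for c in row_cols]))

def write_by_name(target_cols, row_cols, row):
    """Name-based mapping (column_mapping=name): values land by column name."""
    return {c: row[c] for c in target_cols}

# Target table column order fixed at the first pipeline build:
target_cols = ["id", "amount", "derived_total"]

# After a Derive node is added, the written DataFrame's column order changes:
row = {"id": 1, "derived_total": 30, "amount": 10}
reordered = ["id", "derived_total", "amount"]

by_order = write_by_order(target_cols, reordered, row)  # values misaligned
by_name = write_by_name(target_cols, reordered, row)    # values correct
```

With order-based mapping, `amount` silently receives the derived total and vice versa; name-based mapping keeps each value under its own column regardless of order.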
Limitations
- The sorting key field is not available in table configuration for Confluent sources onboarded in Snowflake environments.
- After upgrading to version 6.2.0, failed workflow runs from earlier releases cannot be restarted due to the introduction of versioning support in workflows.
- Workflow runs paused in a previous version are marked as failed when resumed after upgrade, even if all jobs within the tasks completed successfully.
- Writing to the Iceberg uniform format is not supported when the table is partitioned by a TIMESTAMP column, as Iceberg does not support partitioning on raw timestamp fields during Delta-to-Iceberg conversion. Since the write flow involves writing to Delta first and then performing a reorg to Iceberg, any TIMESTAMP-based partitioning will cause the conversion to fail.
- Sync to target as Azure Cosmos DB or setting Azure Cosmos DB as a pipeline target is not supported in Azure Databricks environments with Unity Catalog enabled when using a shared mode cluster. This is a limitation of the Azure Cosmos DB connector.
- Pipeline node preview data is not supported when using clusters in Shared mode with Unity Catalog.
- Discrepancy when editing the Authentication Type configuration for a persistent cluster: the authentication type cannot be changed for an existing cluster. Note: The cluster creator cannot be changed once the cluster is created. Updating authentication details with different user credentials does not affect the creator. To change the creator, delete the cluster from the Databricks console and recreate it from Infoworks.
- When a table is ingested using Databricks 14.3 and preview data is then checked in the pipeline with either 11.3 or 14.3, the API times out.
- Streaming is not supported on a shared cluster.
- If the target node properties (e.g., target table name or target table base path) are changed after a successful pipeline build, DT does not treat the modified table as a new table. Note: To update the target node properties, delete the existing target node and configure a new one in the pipeline editor.
- For TPT jobs running on a shared cluster, it is the user's responsibility to install TPT; otherwise, the job will not work due to a Databricks limitation.
- In a non-Unity Catalog environment, the Databricks SQL execution type is only supported with DBFS storage.
- Target tables used in SQL pipelines without a CREATE TABLE query will not be available for use in data models.
- Spark execution type does not support SQL pipelines.
- Jobs on the Databricks Unity environment fail with "Error code: FILE_NOT_FOUND_FAILURE." Refer here.
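The Iceberg uniform-format limitation above arises from partitioning on raw TIMESTAMP columns during Delta-to-Iceberg conversion. A common workaround, offered here as an assumption rather than a documented Infoworks feature, is to derive a coarser day-granularity column from the timestamp and partition on that instead. A minimal sketch of the derivation:

```python
from datetime import datetime

def partition_key(ts: datetime) -> str:
    """Derive a day-granularity partition key from a raw timestamp.

    Partitioning on this derived string avoids raw TIMESTAMP partitioning,
    which the Delta-to-Iceberg conversion cannot handle.
    """
    return ts.strftime("%Y-%m-%d")

events = [
    {"id": 1, "ts": datetime(2026, 2, 3, 9, 15)},
    {"id": 2, "ts": datetime(2026, 2, 3, 23, 59)},
    {"id": 3, "ts": datetime(2026, 2, 4, 0, 1)},
]

# Rows sharing a calendar day land in the same partition.
partitions = {}
for e in events:
    partitions.setdefault(partition_key(e["ts"]), []).append(e["id"])
```

The same idea applies at other granularities (month, hour) depending on data volume per partition.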
Installation
For a Kubernetes-based installation, refer to Infoworks Installation on Azure Kubernetes Service (AKS).
Upgrade
For upgrading Azure Kubernetes, refer to Upgrading Infoworks from 6.2.1.x to 6.2.2 for Azure Kubernetes.
PAM
Please refer to the Product Availability Matrix (PAM).
For more information, contact support@infoworks.io.