Date of Release: October 2025
What's New
This section consists of the new features and enhancements introduced in this release.
- Secrets Admin Role: A new Secrets Admin role has been introduced for managing secrets with controlled dashboard access.
- PostgresDB 15.x Support Introduced: PostgresDB 15.x is now supported. As PostgresDB 13.x reaches end-of-life (EOL) in November 2025, users are advised to follow the Steps for PostgreSQL Upgrade; a quick version check is sketched after this list.
- Default Concurrent Run Configuration: Admins can now define a default concurrent run limit for workflows through configuration settings.
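Before planning the 13.x to 15.x upgrade, it can help to confirm which PostgreSQL version is currently running. The check below is a minimal, generic sketch that assumes psql access to the PostgreSQL instance used by the deployment; connection details and the actual upgrade procedure are covered in the Steps for PostgreSQL Upgrade.

```sql
-- Generic check: confirm the running PostgreSQL server version before upgrading.
SELECT version();

-- Returns only the version number (e.g., 13.x or 15.x).
SHOW server_version;
```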
Resolved Issues
This section consists of the resolved issues in this release:
| JIRA ID | Issue |
|---|---|
| IPD-28898 | Pipeline group is failing with a “no current database” error because the domain-level pre-execute query (USE <DB_NAME>) is not being applied. |
| IPD-28899 | Sample data is not being populated for source ingestion |
| IPD-28994 | Incremental Run for Vertica source failing |
| IPD-28962 | Infoworks AKS Cluster Migration to Network-Isolated Clusters |
| IPD-28993 | Orchestrator Worker Stale/Down Issue |
Known Issues
- Ingestion job fails for Vertica sources when a String column is used as the Split-By column or when the Split-By column is derived using the MOD() function.
- SQL pipeline builds on Snowflake environments that use CREATE and INSERT table statements are marked as failed after SQL query execution.
- Pipeline builds with Confluent source tables using the “Read from the merged/deduplicated table” option are not executing successfully.
- Confluent streaming ingestion jobs using the AVRO message format are not completing successfully.
- Confluent ingestion jobs using the Protobuf message format may encounter errors during execution.
- Audit updates for sensitive entities such as S3 access/secret keys and WASB account keys are not captured.
- Incorrect preview data is observed when the filter condition is modified via the pipeline version parameter.
- Sync to Target and pipeline jobs may fail with SQL Server targets when the table name contains a reserved keyword.
- Sync to Target and pipeline build jobs with Snowflake targets may fail when Enable Schema Sync is enabled if columns are added or deleted after the first job run.
- In the onboarding flow, the target table name accepts duplicates when the case does not match.
- Ingestion fails in the Unity Catalog environment when the target catalog name contains a '-' character.
- In preview data, SQL queries are run using the previously selected profile.
- The target table in a SQL pipeline created through SQL import is unavailable as a reference table.
- Insert Overwrite mode is not supported for Spark native targets in transformation pipelines.
- The SCD2 merge audit columns are not being updated correctly for referenced tables in Snowflake and Datalake environments.
- Segment load jobs are reporting incorrect row counts during ingestion in the Snowflake environment.
- Preview Data in a custom target node in a pipeline is not supported.
- Streaming jobs that have been stopped may still show a running state for the cluster job. Users can verify that the job is actually stopped by confirming that the number of batches run for that job does not increase after stopping it; more details here.
- Micro-batch processing stops for streaming ingestion on Databricks Runtime 14.3 if the source stops streaming data.
- Pipeline builds fail when the “Read from the merged/deduplicated table” option is selected.
Limitations
- The sorting key field is not available in table configuration for Confluent sources onboarded in Snowflake environments.
- After upgrading to version 6.2.0, failed workflow runs from earlier releases cannot be restarted due to the introduction of versioning support in workflows.
- Workflow runs paused in a previous version are marked as failed when resumed after upgrade, even if all jobs within the tasks completed successfully.
- Writing to the Iceberg uniform format is not supported when the table is partitioned by a TIMESTAMP column, as Iceberg does not support partitioning on raw timestamp fields during Delta-to-Iceberg conversion. Since the write flow involves writing to Delta first and then performing a reorg to Iceberg, any TIMESTAMP-based partitioning will cause the conversion to fail (see the illustrative sketch after this list).
- Sync to target as Azure Cosmos DB or setting Azure Cosmos DB as a pipeline target is not supported in Azure Databricks environments with Unity Catalog enabled when using a shared mode cluster. This is a limitation of the Azure Cosmos DB connector.
- Pipeline node preview data is not supported when using clusters in Shared mode with Unity Catalog.
- Discrepancy while editing the 'Authentication Type' configuration for a persistent cluster: the user is unable to change the authentication type for an existing cluster. Note: The cluster creator cannot be changed once the cluster is created. Updating authentication details with different user credentials will not affect the creator. To change the creator, the cluster must be deleted from the Databricks console and recreated from Infoworks.
- When a table is ingested using Databricks 14.3, previewing its data in a pipeline with either Databricks 11.3 or 14.3 results in an API timeout.
- Streaming is not supported on a shared cluster.
- If the target node properties (e.g., target table name or target table base path) are changed after a successful pipeline build, DT will not treat the modified table as a new table. Note: If a user needs to update the target node properties, they must delete the existing target node and configure a new one in the pipeline editor.
- For TPT jobs running on a shared cluster, it is the user's responsibility to install TPT; otherwise, the job will not work due to a Databricks limitation.
- In a non-Unity Catalog environment, the Databricks SQL execution type is supported only with DBFS storage.
- Target tables used in SQL pipelines without a CREATE TABLE query will not be available for use in data models.
- Spark execution type does not support SQL pipelines.
- Jobs in the Databricks Unity Catalog environment fail with "Error code: FILE_NOT_FOUND_FAILURE." Refer here.
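For the Iceberg uniform format limitation noted above, one common workaround is to partition on a DATE column derived from the TIMESTAMP rather than on the raw TIMESTAMP itself, so the Delta-to-Iceberg conversion has a supported partition type. The sketch below is illustrative only, uses standard Databricks Delta Lake SQL, and assumes hypothetical table and column names (events, event_ts, event_date); it does not represent the Infoworks write flow itself.

```sql
-- Illustrative workaround sketch (hypothetical names): partition the Delta table by a
-- generated DATE column instead of the raw TIMESTAMP, so that the later Delta-to-Iceberg
-- (UniForm) conversion does not fail on timestamp partitioning.
CREATE TABLE events (
  id         BIGINT,
  event_ts   TIMESTAMP,
  event_date DATE GENERATED ALWAYS AS (CAST(event_ts AS DATE))
)
USING DELTA
PARTITIONED BY (event_date);

-- The reorg step mentioned in the limitation corresponds to upgrading the Delta table to
-- the Iceberg uniform format, e.g. on Databricks runtimes that support UniForm:
REORG TABLE events APPLY (UPGRADE UNIFORM(ICEBERG_COMPAT_VERSION = 2));
```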
For a Kubernetes-based installation, refer to Infoworks Installation on Azure Kubernetes Service (AKS).
For more information, contact support@infoworks.io.
Upgrade
For upgrading Azure Kubernetes, refer to Upgrading Infoworks from 6.2.0.x to 6.2.1 for Azure Kubernetes.
PAM
Please refer to the Product Availability Matrix (PAM).