Infoworks Release Notes

v6.1.1

Date of Release: December 2024

New Features and Enhancements

This section describes the new features and enhancements introduced in this release.

  • Serverless DBSQL and SQL API submission with Photon - Support added for the Databricks serverless execution type in pipelines, with support for Photon, the built-in vectorized query engine on Databricks that makes SQL and DataFrame API calls faster and reduces the total cost per workload. More details here.
  • Unity Catalog/Databricks security enhancements - Users can now provide a staging catalog and staging schema, which are used to create all tables other than the target table (history, error, merged, CDC, and segment tables). A sketch of the resulting table layout follows this list.
  • SQL pushdown enhancements - Users can now trigger SQL pushdown pipelines on the Databricks environment by configuring Databricks SQL warehouses; a connection sketch follows this list. Details can be found here.
  • Workflows usability improvements - Workflows now support searching for nodes, use the active version of a pipeline for executions, and allow navigating directly to the parent workflow run. You can refer to the details here.
  • Custom workflow runs - Added support for custom workflow runs on the Workflow Build page. Users can now skip specific tasks in a workflow run.
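
The staging catalog/schema option above separates Infoworks' auxiliary tables from the target table. The following is a minimal sketch of the resulting layout using PySpark on Databricks; the catalog, schema, table, and column names are hypothetical placeholders, not Infoworks internals.

```python
# A minimal sketch, not Infoworks internals: with a staging catalog/schema
# configured, only the target table lands in the target catalog/schema, while
# auxiliary tables (history, error, merged, CDC, segment) are created in the
# staging location. All names below are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # on Databricks, `spark` already exists

# Target table in the governed target location:
spark.sql("""
    CREATE TABLE IF NOT EXISTS prod_catalog.sales.orders
    (id INT, amount DECIMAL(10, 2))
""")

# Auxiliary tables redirected to the staging catalog/schema:
spark.sql("""
    CREATE TABLE IF NOT EXISTS staging_catalog.staging_schema.orders_history
    (id INT, amount DECIMAL(10, 2), record_ts TIMESTAMP)
""")
```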

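Separately, the serverless DBSQL and SQL pushdown items both rely on a configured Databricks SQL warehouse. Below is a hedged connection sketch using the open-source databricks-sql-connector package; the hostname, HTTP path, and token are placeholders, and this is not Infoworks' internal submission code.

```python
# Sketch: submitting a statement to a Databricks SQL warehouse. On a serverless
# warehouse, Photon executes the query.
# Requires `pip install databricks-sql-connector`.
from databricks import sql

with sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",  # placeholder
    http_path="/sql/1.0/warehouses/abcdef1234567890",              # placeholder warehouse ID
    access_token="dapi...",                                        # placeholder token
) as connection:
    with connection.cursor() as cursor:
        cursor.execute("SELECT current_timestamp()")
        print(cursor.fetchall())
```
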
Resolved Issues

This section lists the issues resolved in this release:

JIRA ID     Issue
IPD-27086   REST API doesn't propagate additional API parameters to the next URL
IPD-27004   Two workflows pending due to log rotation
IPD-27050   Orchestrator pod in CrashLoopBackOff status after upgrade
IPD-27114   In SQL-VIS conversion, a semicolon in the SQL node pipeline is not supported
IPD-27094   Unable to retrieve the last login for users
IPD-27113   Newly added columns are not added at the end, resulting in a column order issue in v6.1.0
IPD-27318   Source_schema_name and Source_table_name are interchanged in the ingestion metrics response in v6.1.0

Known Issues

  • Parallel segment ingestion for segmented loads fails when history is enabled.
  • The target table of a SQL pipeline created through SQL import is not available as a reference table.
  • Insert Overwrite mode is not supported in transformation pipelines.
  • Non-Delta and non-Unity storage formats encounter issues in the Unity Catalog environment, specifically when using insert overwrite mode.
  • CDC Insert Overwrite ingestion jobs with derived split-by columns fail when using Query as a Table.
  • Insert Overwrite ingestion jobs using Query as a Table fail when column names contain spaces.
  • CDC ingestion for tables created using Query as a Table with Insert Overwrite incremental mode fails when column names contain spaces.
  • CDC ingestion for tables created using Query as a Table with Insert Overwrite incremental mode and a derived split-by column configured fails.
  • An ERROR message appears when opening preview data in a custom target node in a pipeline.
  • Streaming jobs that have been stopped may still show a running state for the cluster job. Users can verify that the job is actually stopped by confirming that the number of batches run for that job does not increase after stopping it (see the sketch after this list); more details here.
  • Micro-batch processing stops for streaming ingestion if the source stops streaming data on Databricks runtime version 14.3.
  • Pipeline builds fail when read from merged/deduplicated table is selected.
  • Pipeline node preview data requests time out for the first few tries.
  • An error message appears when attempting to delete an ACT CRM source.
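
For the stopped-streaming-job issue above, the check can be scripted. This is a sketch under the assumption that you hold a PySpark StreamingQuery handle for the job (for example, in a Databricks notebook); it is not an Infoworks API.

```python
# Sketch: confirm a Structured Streaming job is effectively stopped by checking
# that its micro-batch ID stops advancing. `query` is assumed to be a
# pyspark.sql.streaming.StreamingQuery handle.
import time

def batches_still_advancing(query, wait_seconds=60):
    """Return True if the query's latest micro-batch ID advances within the window."""
    def last_batch_id():
        progress = query.recentProgress  # recent progress dicts, newest last
        return progress[-1]["batchId"] if progress else -1

    before = last_batch_id()
    time.sleep(wait_seconds)
    return last_batch_id() > before

# If this returns False after the job was stopped, no new micro-batches are
# being processed and the job can be treated as stopped.
```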

Limitations

  • CDC SCD2 pipeline builds fail intermittently in a 6.1.0 Unity Catalog environment.
  • When a table is ingested using Databricks 14.3 and preview data is requested in a pipeline on either 11.3 or 14.3, the API times out.
  • Streaming is not supported on a shared cluster.
  • For TPT jobs running on a shared cluster, it is the user's responsibility to install TPT; otherwise, the job will not work due to a limitation from Databricks.
  • Switching the pipeline build engine between DBSQL and Spark fails with a schema mismatch error when DECIMAL columns are present in the data (see the sketch after this list).
  • In a non-Unity Catalog environment, the Databricks SQL execution type is supported only with DBFS storage.
  • Node SQL preview for pipelines configured to run on the DBSQL execution engine may not be accurate. The correct query is available in the target SQL preview.
  • Target tables used in SQL pipelines without a CREATE TABLE query will not be available in the data models.
  • Spark execution type does not support SQL pipelines.
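
For the DECIMAL schema-mismatch limitation above, affected tables can be identified before switching build engines. This is a diagnostic sketch assuming a Databricks notebook session; the table name is a placeholder, and this is not an Infoworks feature.

```python
# Sketch: list DECIMAL columns in a pipeline target table, since those columns
# trigger the schema mismatch when switching between DBSQL and Spark engines.
from pyspark.sql import SparkSession
from pyspark.sql.types import DecimalType

spark = SparkSession.builder.getOrCreate()

table = "prod_catalog.sales.orders"  # placeholder table name
decimal_cols = [
    field.name
    for field in spark.table(table).schema.fields
    if isinstance(field.dataType, DecimalType)
]
print(f"{table}: DECIMAL columns -> {decimal_cols}")
```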

For Kubernetes-based installation, refer to Infoworks Installation on Azure Kubernetes Service (AKS).

For more information, contact support@infoworks.io.

Upgrade

For upgrading Azure Kubernetes, refer to Upgrading Infoworks from 6.1.0.x to 6.1.1 for Azure Kubernetes.

PAM

The Product Availability Matrix (PAM) is available here.
