If the interactive service times out or an error occurs in the Dataproc environment, restart the DT service by running the following commands:
/opt/infoworks/bin/stop.sh dt
/opt/infoworks/bin/start.sh dt
If a data overwrite fails for the Delta storage format, set the following advanced configuration to overwrite the schema:
Key: dt_spark_df_writer_additional_options
Value: overwriteSchema=true
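The value above is passed through as an option on the Spark DataFrame writer. As a minimal sketch of what that mapping looks like (the `parse_writer_options` helper below is hypothetical, not Infoworks code, and the comma-separated multi-option format is an assumption), a string such as `overwriteSchema=true` can be split into key/value pairs and applied via `.option()` calls:

```python
# Hypothetical helper illustrating how a writer-options string such as
# "overwriteSchema=true" could map to individual Spark DataFrame writer
# .option() calls. This parser is an illustration, not Infoworks source code.
def parse_writer_options(value: str) -> dict:
    """Split a comma-separated key=value string into an options dict."""
    options = {}
    for pair in value.split(","):
        if not pair.strip():
            continue
        key, _, val = pair.partition("=")
        options[key.strip()] = val.strip()
    return options

# The configuration value from above:
opts = parse_writer_options("overwriteSchema=true")
print(opts)  # {'overwriteSchema': 'true'}

# In PySpark, these options would then be applied to the writer, e.g.:
#   writer = df.write.format("delta").mode("overwrite")
#   for k, v in opts.items():
#       writer = writer.option(k, v)
```

The `overwriteSchema` option itself is a standard Delta Lake writer option: when set to `true`, an overwrite write is allowed to replace the table's existing schema instead of failing on a schema mismatch.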
Sometimes, when the Delta format is used for the target node, the pipeline build fails on persistent clusters in the Azure Databricks environment with the following error: "org.apache.spark.sql.AnalysisException: Table or view not found". If you encounter this issue, restart the persistent cluster; if the error persists, build the pipeline again on an ephemeral cluster.