Failed to save a table to MS Azure Databricks

Hi all,
I got the following error in one of my Kedro nodes when a table was being saved to MS Azure Databricks.

py4j.protocol.Py4JJavaError: An error occurred while calling o1010.saveAsTable.
: org.apache.spark.SparkException: Job aborted.
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:230)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:178)
at org.apache.spark.sql.execution.datasources.DataSource.writeAndRead(DataSource.scala:575)
at org.apache.spark.sql.execution.command.CreateDataSourceTableAsSelectCommand.saveDataIntoTable(createDataSourceTables.scala:218)
at org.apache.spark.sql.execution.command.CreateDataSourceTableAsSelectCommand.run(createDataSourceTables.scala:176)
…(similar messages omitted)…
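
From the traceback, the failing call appears to be a plain DataFrameWriter.saveAsTable. Outside Kedro, that call corresponds roughly to the sketch below (the database/table name, the format, and the sample DataFrame are placeholders, not the actual ones from my pipeline):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Placeholder DataFrame standing in for the node output
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])

# Same kind of call that appears in the traceback (o1010.saveAsTable)
df.write.mode("overwrite").format("parquet").saveAsTable("my_database.table1")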

The line of source code that triggers this error is

return table1

When I returned the other table from the same node, like

return table2

then it worked. However, certain tables created in this one node cannot be saved as outputs.
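
For context, the node and its outputs are wired up roughly as follows (the function, dataset, and variable names here are placeholders, not the real project code):

from kedro.pipeline import node

def create_tables(input_df):
    # ... transformations on input_df ...
    table1 = input_df.select("*")  # placeholder logic
    table2 = input_df.select("*")  # placeholder logic
    return table1  # saving this output fails; returning table2 instead works

create_tables_node = node(
    func=create_tables,
    inputs="input_dataset",
    outputs="output_table",  # catalog entry that writes to Databricks via saveAsTable
)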

  • Kedro version: 0.16.6
  • Python version: 3.7.9

Has anyone faced the same issue?
I would appreciate it if someone could share the resolution.

Thank you.
@Minyus @waylonwalker