apache spark - Azure Synapse pipeline with dataflows failing randomly - Stack Overflow

I am having issues with a series of pipelines that build our data platform Spark databases hosted in Azure Synapse.

The pipelines host dataflows with 'recreate table' enabled. The dataflows extract data and are supposed to recreate the tables on each pipeline run. There is also a step at the start of the job that drops all the tables. However, the jobs randomly fail at different stages with errors like the one below (sensitive system details have been removed):

Operation on target failed: {"StatusCode":"DFExecutorUserError","Message":"Job failed due to reason: at Sink 'sinkname': Spark job failed in one of the cluster nodes while writing data in one of the partitions to sink, with following error message: Failed to rename VersionedFileStatus{VersionedFileStatus{path=abfss://synapsename.dfs.core.windows/synapse/workspaces/synapsename/warehouse/databasename.db/tablename/.name removed/_temporary/0/_temporary/idremoved/part-idremoved.snappy.parquet; isDirectory=false; length=636844; replication=1; blocksize=268435456; modification_time=1731778904698; access_time=0; owner=81aba2ef-674d-4bcb-a036-f4ab2ad78d3e; group=trusted-service-user; permission=rw-r-----; isSymlink=false; hasAcl=true; isEncrypted=false; isErasureCoded=false}; version='0x8DD0665F02661DC'} to abfss://[email protected]/synapse/workspaces/synapsename/warehouse/dataplatform","Details":null}

The failure can occur on any Spark database table load, may not occur at all the next day, and may reappear a few days later.

To fix this, we go to the Synapse backend data lake storage, manually delete the Spark database table's files (the parquet files), and rerun the job, which then succeeds. We have also tried increasing resources, including the Spark runtime.
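For what it's worth, the manual workaround described above (delete the leftover table files, then rerun) can be wrapped in an automatic retry. A minimal plain-Python sketch of that pattern, using a hypothetical `fake_load` as a stand-in for the real dataflow run (none of these names are Synapse APIs):

```python
import os
import shutil
import tempfile

def run_with_cleanup_retry(run_job, table_path, max_attempts=2):
    """Run a table load; on failure, delete the leftover table directory
    (the manual workaround from the question) and retry."""
    for attempt in range(1, max_attempts + 1):
        try:
            return run_job()
        except OSError:
            if attempt == max_attempts:
                raise
            # Automate the manual step: clear stale files before rerunning.
            shutil.rmtree(table_path, ignore_errors=True)

# Demo with a fake job that fails while stale files exist.
warehouse = tempfile.mkdtemp()
table_path = os.path.join(warehouse, "tablename")
os.makedirs(table_path)
open(os.path.join(table_path, "stale.parquet"), "w").close()

def fake_load():
    # Simulates the rename failure when leftover files are present.
    if os.path.exists(os.path.join(table_path, "stale.parquet")):
        raise OSError("Failed to rename part file to sink")
    os.makedirs(table_path, exist_ok=True)
    open(os.path.join(table_path, "part-0000.snappy.parquet"), "w").close()
    return "ok"

result = run_with_cleanup_retry(fake_load, table_path)
print(result)
```

In a real pipeline the retry would live at the orchestration level (e.g. a retry policy on the activity plus a cleanup step), not inside the dataflow itself.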

Any thoughts, anyone?

asked Nov 16, 2024 at 21:15 by NITHIN B
  • Update: The MS team gave an update that this is an issue with their blob storage and is looking into it. It seems to be a known issue. Has anyone else encountered this? – NITHIN B, commented Jan 9 at 9:15

1 Answer

Set the sink's concurrency to 1. Typically the conflict is on the shared _temporary folder that Spark uses to stage output before the final rename; concurrent writers can collide on it.
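To illustrate why serializing the writes helps: the error shows a rename out of a _temporary staging path, and concurrent loads sharing that staging area can collide on the commit. A plain-Python sketch of the idea (no Spark; paths and names are illustrative, loosely mimicking the stage-then-rename commit in the error message):

```python
import os
import shutil
import tempfile
import threading

def load_table(warehouse, table, lock):
    """Stage output under a shared _temporary directory, then rename it
    into place -- the step that fails in the question when writers race."""
    staging = os.path.join(warehouse, "_temporary", table)
    final = os.path.join(warehouse, table)
    with lock:  # concurrency = 1: only one load stages and commits at a time
        os.makedirs(staging, exist_ok=True)
        with open(os.path.join(staging, "part-0000.snappy.parquet"), "w") as f:
            f.write("data")
        if os.path.exists(final):
            shutil.rmtree(final)   # 'recreate table' semantics
        os.rename(staging, final)  # the commit rename

warehouse = tempfile.mkdtemp()
lock = threading.Lock()
threads = [
    threading.Thread(target=load_table, args=(warehouse, tbl, lock))
    for tbl in ("tablename_a", "tablename_b")
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

In the Synapse UI the equivalent is the concurrency/partitioning setting on the dataflow sink; setting it to 1 trades write parallelism for avoiding the rename collision.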
