Advocate II

Cannot refresh or save dataflow - Cannot acquire lock for model

Hi all,

Just coming back from holidays (Happy New Year to you all!) and various dataflows have stopped working while I was away, which is not good...

I try to refresh on schedule or manually and they don't refresh. There isn't an obvious error: the error files say all entities refreshed correctly, yet the dataflow itself fails to refresh, giving no error message at all.

If I try to edit the dataflow and then save, a window like the following appears:

 

[Screenshot: DataflowSaveError.PNG]

Any ideas on how to fix this?

 

Thanks,


11 REPLIES
Community Support

Hi @NAOS ,

 

You could refer to this solved case.

https://community.powerbi.com/t5/Service/Can-t-save-dataflow-cannot-acquire-lock-for-model/m-p/81857...

Restart your premium capacity and check if it works.

 

Community Support Team _ Eads
If this post helps, then please consider accepting it as the solution to help other members find it.

Hi @v-eachen-msft ,

 

Thanks for your answer. Unfortunately, that doesn't work for me. The users in that thread actually mention that even restarting the premium capacity several times didn't fix the bug.

If you check the answer given to that post, you'll notice it isn't actually the solution, nor does it fix the problem.

 

I've raised a ticket with support and will wait to see what they can add.

 

If, in the meantime, you could try to find out more about the issue with the team, it would be much appreciated, since more than a few of our dataflows are presenting this problem...

 

Thanks

Memorable Member

@NAOS 

Thanks for the confirmation that you already submitted a support ticket.

Could you please share the support ticket number so we can get quicker traction on it?

 

If you have any queries, please let us know.

 

If this post helps, then please consider accepting it as the solution to help other members find it.
If this post was helpful, may I ask you to mark it as the solution and click the thumbs-up symbol?

 

Best Regards,

Venal.

Hi @venal !

Thanks for your comments. The support request number is 120010222001657. I was advised to check that "Enhanced Dataflows Compute Engine" was active (which it was) and to reset the premium capacity (something our IT department still has to do).

 

I'll post to this thread if I have any updates or something changes.

 

Thanks,

 

NAOS

@NAOS I'm experiencing something similar with one particular dataflow that has two entities connecting to an Azure SQL Database without using a gateway. It started failing on Thursday 02/01. The dataflow refresh status shows as "Failed" but the Entity refresh status for both entities shows as "Completed". There is no error message in the "Error" column. It fails on the scheduled refresh and a manual refresh.

 

When I try to modify the dataflow and save the changes I get the "cannot acquire lock for model" error message.

 

If I export the JSON and create a replica dataflow in the same workspace by importing the JSON, it works fine.
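The export step described above can also be scripted. This is a rough sketch, assuming the Power BI REST API's "Dataflows - Get Dataflow" endpoint (which returns the dataflow definition file, model.json); ACCESS_TOKEN, WORKSPACE_ID and DATAFLOW_ID are placeholders you would supply, and error handling is minimal.

```python
import json
import urllib.request

API = "https://api.powerbi.com/v1.0/myorg"

def dataflow_export_url(workspace_id: str, dataflow_id: str) -> str:
    """URL of the Get Dataflow endpoint, which returns model.json."""
    return f"{API}/groups/{workspace_id}/dataflows/{dataflow_id}"

def export_dataflow(token: str, workspace_id: str, dataflow_id: str) -> dict:
    """Download the dataflow definition so it can be re-imported as a replica."""
    req = urllib.request.Request(
        dataflow_export_url(workspace_id, dataflow_id),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

# Usage (placeholders):
# definition = export_dataflow(ACCESS_TOKEN, WORKSPACE_ID, DATAFLOW_ID)
# with open("model.json", "w", encoding="utf-8") as f:
#     json.dump(definition, f, indent=2)
```

The saved model.json can then be re-imported through the workspace's "Import Model" option, as in the manual workaround.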

 

I will be interested to know if you find a solution!


@PaulKn brilliant! I'm not sure why I hadn't tried that yet, but it worked perfectly.

 

These dataflows that aren't working for me query data from different sources; some use a gateway and others don't, but nothing really shows a pattern. They just suddenly stopped working after months without issues or changes to the queries (and no large amounts of data were added either).

 

Anyway, lessons learned. Although it's not the perfect solution, I will add a parameter to hold the dataflow ID in my queries, so that if this happens again I can just create a new dataflow and update the parameter to repoint the queries at its ID.
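The "dataflow ID as a parameter" idea above can be illustrated with a small sketch. A Power Query (M) expression that reads from a dataflow typically references the workspace and dataflow GUIDs, e.g. `PowerBI.Dataflows(null){[workspaceId="..."]}[Data]{[dataflowId="..."]}[Data]`. The helper below is hypothetical (not anything from the thread): it swaps the `dataflowId` GUID in such an expression, so that repointing queries after recreating a dataflow becomes a single substitution.

```python
import re

def repoint_dataflow(m_expression: str, new_dataflow_id: str) -> str:
    """Replace the dataflowId GUID in an M source expression with a new one."""
    return re.sub(
        r'(dataflowId\s*=\s*")[0-9a-fA-F-]+(")',
        lambda m: m.group(1) + new_dataflow_id + m.group(2),
        m_expression,
    )

old = ('PowerBI.Dataflows(null){[workspaceId="aaaa-1111"]}[Data]'
       '{[dataflowId="bbbb-2222"]}[Data]')
print(repoint_dataflow(old, "cccc-3333"))
```

Inside the Power BI service itself the same effect is achieved by defining the ID as a query parameter, so only the parameter value needs editing.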

 

Many thanks!

NAOS

@NAOS I didn't really consider creating a new dataflow to be a solution, more a troubleshooting step to rule out any connection issues etc. Whilst we do sometimes use parameters in reports to hold the workspace and dataflow IDs (so that we can easily switch between a dev and a production version of a dataflow), if you have a large number of reports created by users of mixed skillsets, like we have, then changing the ID on all of them isn't really practical.

 

Did your IT Department try restarting the capacity? We will try that tonight, but in the meantime I guess I will log a case as well.

Hi @PaulKn ,

We haven't yet restarted our capacity (IT dep. still hasn't done it...), but I'll post here once we do to let you know if that fixed the issue or not.

Kind regards,

 

NAOS

 

 

@NAOS Rather than restarting the whole capacity (which would need to be done out of hours to minimise disruption), I went into Admin Portal >> Capacity Settings and, under Workloads >> Dataflows, increased the Max Memory by 1%, applied the change, then changed it back to its original setting and applied the change again. This only took a few seconds. The theory was that this would force the dataflow workload to restart, which might clear the problem.

 

It appears to have worked, as I am now able to refresh the dataflow, so this might be worth suggesting to your IT department as being less disruptive than restarting the whole capacity.
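For capacity admins, the Max Memory toggle described above could also be scripted. This is a hedged sketch, assuming the Power BI REST API's "Capacities - Patch Workload" endpoint and its documented request body (`state`, `maxMemoryPercentageSetByUser`); ACCESS_TOKEN and CAPACITY_ID are placeholders, and capacity-admin rights are required.

```python
import json
import urllib.request

API = "https://api.powerbi.com/v1.0/myorg"

def workload_patch(state: str, max_memory_pct: int) -> dict:
    """Request body for the Patch Workload endpoint."""
    return {"state": state, "maxMemoryPercentageSetByUser": max_memory_pct}

def patch_dataflows_workload(token: str, capacity_id: str,
                             max_memory_pct: int) -> None:
    """Set the Dataflows workload's Max Memory % on a capacity."""
    body = json.dumps(workload_patch("Enabled", max_memory_pct)).encode()
    req = urllib.request.Request(
        f"{API}/capacities/{capacity_id}/Workloads/Dataflows",
        data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="PATCH",
    )
    urllib.request.urlopen(req, timeout=30)

# Usage (placeholders): bump by 1%, then restore the original value,
# mirroring the manual workaround in the post above.
# patch_dataflows_workload(ACCESS_TOKEN, CAPACITY_ID, 21)
# patch_dataflows_workload(ACCESS_TOKEN, CAPACITY_ID, 20)
```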


@PaulKn , @venal , @v-eachen-msft 

That worked! I'll try to change the accepted answer to mark that last comment as the solution.

Thanks PaulKn! Great work.

 

NAOS

Super User IV

You could check the Issues forum here:

https://community.powerbi.com/t5/Issues/idb-p/Issues

And if it is not there, then you could post it.

If you have a Pro account, you can open a support ticket for free. Go to https://support.powerbi.com, scroll down and click "CREATE SUPPORT TICKET".


---------------------------------------

@ me in replies or I'll lose your thread!!!
