Hi,
for one of our clients, we have a data model (about 350 MB) that refreshed without problems until the beginning of this week. Since then, refreshes have been failing with a "memory allocation failure" error. We simplified some queries and removed columns, which improved refresh performance in Desktop and also worked in the dev workspace. However, in another workspace on the same tenant, the problem persists even with the optimized version of the data model: refreshes still fail with "memory allocation failure".
I'm a bit confused by the successful refreshes in the other workspace. Have you experienced this behaviour, or do you have any ideas about possible causes?
Thanks a lot
Hi, @HSHH
This looks like an issue with one or more calculated columns. You can usually resolve it by removing the column or simplifying the formula behind it.
Did you update the model by directly replacing it in the service? You may need to completely delete the original model in the workspace before uploading the new one.
If the problem is still not resolved, I recommend opening a support ticket so engineers can look into the issue on your side.
Best Regards,
Community Support Team _ Eason
Is this on a shared capacity or a dedicated P SKU?
It's a shared capacity.