Hello,
I have defined an incremental refresh policy to refresh the last 6 months and archive the full 5 years of data, in order to reduce peak memory usage during a refresh. However, I don't see much difference in the number of refresh failures or even in the refresh duration. I checked the partitions in Analysis Services, and when a refresh ends successfully, I can see that only the partitions for the last 6 months have been updated (which aligns with my incremental refresh policy). However, when I refreshed the 2 tables with the incremental refresh policy in SSMS, I noticed that the entire dataset had been transferred. Yet, after checking the partitions, only the last 6 months had been processed.
This leads me to the impression that during the refresh, only the last 6 months are updated, but the entire dataset is processed. This could explain why I don't see much difference in terms of memory usage and refresh time.
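For reference, the partition state described above can be inspected from SSMS with a DMV query against the semantic model. This is a sketch; it assumes the standard TMSCHEMA_PARTITIONS rowset and is run from an MDX/DMV query window connected to the model:

```sql
-- Lists every partition in the model with its last refreshed time,
-- so you can confirm which partitions a refresh actually touched.
SELECT [Name], [TableID], [RefreshedTime]
FROM $SYSTEM.TMSCHEMA_PARTITIONS
```

Comparing the RefreshedTime values before and after a refresh shows which partitions were reprocessed, independently of how many rows the trace reports as transferred.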
Below, you can see that almost 9.5 million and 8.5 million rows were transferred for these 2 tables, which represents the entire dataset (5 years of data):
I would appreciate it if you could help me understand how this works.
Thanks!
When refreshing multiple partitions you can influence the level of parallelism. You may want to tone that down a bit, or even go fully sequential.
Which refresh type did you specify?
Do you have table dependencies (such as Auto Date/Time)? Check the refresh log to see whether other tables are processed when you ask for the processing of a particular table.
I did a "full" refresh, and it processed the entire dataset but refreshed only the partitions for the last 6 months.
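For context, when refreshing through the XMLA endpoint the refresh type matters: a plain "full" refresh re-reads the source for the objects it targets, while the Power BI TMSL extensions let you ask the service to honor the incremental refresh policy instead. A sketch of such a command, where the database and table names are placeholders:

```json
{
  "refresh": {
    "type": "full",
    "applyRefreshPolicy": true,
    "objects": [
      { "database": "MyModel", "table": "Sales" }
    ]
  }
}
```

With applyRefreshPolicy set to true, the service partitions and refreshes the table according to the defined policy rather than reprocessing everything.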
I noticed something very strange. We have a Premium embedded capacity with 2 workspaces, one for production and one for development (exactly the same configuration), and I published a semantic model into both workspaces. Now I'm constantly getting memory capacity errors for the production workspace, while all refreshes complete successfully in the Dev workspace. I checked the workspace settings and I can't find any differences.
I did some research but did not find anything special.
Thanks.
When refreshing multiple partitions you can influence the level of parallelism. You may want to tone that down a bit, or even go fully sequential.
Thank you! May I know how I can make it fully sequential? As far as I know, we can change the maximum parallel loading in Power BI Desktop, but I'm not sure how to force a fully sequential refresh.
Processing Options and Settings (Analysis Services) | Microsoft Learn
Note that I am talking about partitions, not tables.
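One way to process partitions fully sequentially from SSMS is to wrap the refresh in a TMSL sequence command with maxParallelism set to 1. A sketch, where the database and table names are placeholders:

```json
{
  "sequence": {
    "maxParallelism": 1,
    "operations": [
      {
        "refresh": {
          "type": "full",
          "objects": [
            { "database": "MyModel", "table": "Sales" }
          ]
        }
      }
    ]
  }
}
```

With maxParallelism at 1, the partitions covered by the refresh are processed one at a time, which lowers peak memory usage at the cost of a longer overall refresh.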