amir_mm
Helper II

Incremental refresh and refresh in Analysis services

Hello,

I have defined an incremental refresh policy to refresh the last 6 months and archive the full dataset (5 years) in order to reduce peak memory usage during a refresh. However, I don't see much difference in the number of refresh failures or even in the refresh duration. I checked the partitions in Analysis Services, and when a refresh ends successfully, I can see that only the partitions for the last 6 months have been updated (which aligns with my incremental refresh policy). However, when I refreshed the 2 tables with the incremental refresh policy in SSMS, I noticed that the entire dataset had been transferred. Yet, after checking the partitions, only the last 6 months had been processed.
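For context, the kind of table-level full refresh SSMS issues over the XMLA endpoint corresponds to a TMSL command along these lines (a sketch; the database and table names below are placeholders, not my real model):

```json
{
  "refresh": {
    "type": "full",
    "objects": [
      { "database": "MyModel", "table": "FactSales" }
    ]
  }
}
```

In SSMS, the refresh dialog can script out the actual TMSL it sends, which is useful for checking exactly which objects get processed.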

This gives me the impression that during the refresh, only the last 6 months of partitions are updated, but the entire dataset is still processed. That could explain why I don't see much difference in memory usage or refresh time.

 

Below, you can see that almost 9.5 million and 8.5 million rows have been transferred for these 2 tables, which represents the entire dataset (5 years of data):

[Screenshots: refresh statistics showing ~9.5M and ~8.5M rows transferred for the two tables]

 

I would appreciate it if you could help me understand how this works.

Thanks!

 

1 ACCEPTED SOLUTION

When refreshing multiple partitions, you can influence the level of parallelism. You may want to tone that down a bit, or even go fully sequential.


5 REPLIES
lbendlin
Super User

Which refresh type did you specify?

Do you have table dependencies (like Auto Date/Time)? Check the refresh log to see whether other tables are processed when you ask for a particular table to be refreshed.

I did a "full" refresh, and it processed the entire data but refreshed the last 6 months partitions only.

I noticed something very strange. We have a Premium embedded capacity with 2 workspaces, one for production and one for development (exactly the same configuration), and I published a semantic model into both workspaces. Now I'm constantly getting memory capacity errors in the production workspace, while all refreshes complete successfully in the Dev workspace. I checked the workspace settings and can't find any differences.

I did some research but did not find anything special.

 

Thanks.

When refreshing multiple partitions, you can influence the level of parallelism. You may want to tone that down a bit, or even go fully sequential.

Thank you! May I know how I can make it fully sequential? As far as I know, we can change the maximum number of parallel table loads in Power BI Desktop, but I'm not sure how to make the refresh fully sequential.

Processing Options and Settings (Analysis Services) | Microsoft Learn

 

Note that I am talking about partitions, not tables.
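As a sketch of the fully sequential approach: when scripting the refresh as TMSL (for example from SSMS over the XMLA endpoint), the refresh can be wrapped in a sequence command with maxParallelism set to 1. The database, table, and partition names below are placeholders:

```json
{
  "sequence": {
    "maxParallelism": 1,
    "operations": [
      {
        "refresh": {
          "type": "full",
          "objects": [
            { "database": "MyModel", "table": "FactSales", "partition": "2024Q1" },
            { "database": "MyModel", "table": "FactSales", "partition": "2024Q2" }
          ]
        }
      }
    ]
  }
}
```

maxParallelism caps how many operations within the sequence run concurrently; setting it to 1 forces the partitions to process one at a time, trading longer refresh duration for lower peak memory.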
