newpbiuser01
Helper IV

Data Refresh Times and Duplicate/Referenced Tables

Hello,

 

I have a question about referencing vs. duplicating data tables. I am working with large data tables (>20 million rows), and to make the reports more efficient, I break the master data table down into dimension tables (for dates, locations, etc.). To create these dimension tables, I have two options: either duplicate the master data table, or reference it, remove all inapplicable columns, remove duplicates, and use the result as the dimension table.
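
For illustration, the referencing approach looks roughly like this in Power Query M (the query and column names here are hypothetical, not taken from my actual model):

// Dimension query that references the master query instead of duplicating it
let
    // "MasterData" stands in for the name of the master query
    Source = MasterData,
    // Keep only the columns relevant to this dimension
    KeptColumns = Table.SelectColumns(Source, {"LocationID", "LocationName", "Region"}),
    // Remove duplicate rows so each location appears exactly once
    Dimension = Table.Distinct(KeptColumns)
in
    Dimension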

 

When I duplicated the table and tried a data refresh on the report in the Power BI Service, I got time-out errors because the refresh was taking too long. I then changed my queries to create the dimension tables by referencing the master data table instead.


 

Although the refresh time has gone down considerably now that I reference the master data table to create the dimension tables, it appears that Power BI doesn't hit the main data source just once and then refresh all the dimension tables from that single pull. Instead, it hits the data source, pulls the master table, and updates dimension table 1; then it hits the source again, re-pulls the master table, and updates dimension table 2.

 

What I'm struggling to understand is why the refresh time in the Service is so much lower (it went from over 5 hours to 45 minutes) when the data is being loaded from the main data source the same number of times as it would have been if I had duplicated the table. Also, which approach should we use for creating dimension tables?

 

Any help or insight would be greatly appreciated. 

 

Thank you!

1 ACCEPTED SOLUTION
lbendlin
Super User

20M rows is not considered large (unless you have gazillions of columns)

 

Consider using Table.Buffer if you keep re-using a source table within your query. (It doesn't help across queries.)
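
A minimal sketch of that pattern, assuming a hypothetical SQL source and table names:

let
    // Hypothetical connection; substitute your own source
    Source = Sql.Database("myserver", "mydb"),
    Master = Source{[Schema = "dbo", Item = "MasterTable"]}[Data],
    // Buffer the table once so the steps below reuse the in-memory copy
    // instead of going back to the source for each reference.
    // Note: this only applies within this one query, and buffering
    // 20M rows holds them all in memory during refresh.
    Buffered = Table.Buffer(Master),
    // Two downstream steps that each reuse the same buffered table
    Dates = Table.Distinct(Table.SelectColumns(Buffered, {"Date"})),
    Result = Table.NestedJoin(Buffered, {"Date"}, Dates, {"Date"}, "DateDim", JoinKind.Inner)
in
    Result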

 

Use Query Diagnostics to know for sure where your mashup is spending its time, or use the SQL Profiler option:

Chris Webb's BI Blog: Analysing Dataset Refresh In Power BI Premium Using SQL Server Profiler (cross...

 


