M001
Helper I

Laptop is running very slowly with not very sophisticated models. How could I improve it?

Hi all,

 

Unfortunately my laptop model doesn't allow for adding more RAM, so I am exploring a Virtual Desktop with better specs.

 

Below are my current laptop specs; it's very slow when running mid-level projects:

 

Processor: 12th Gen Intel(R) Core(TM) i5-1245U, 1600 MHz, 10 Core(s), 12 Logical Processor(s)
Installed RAM: 16.0 GB (15.7 GB usable)
  • Installed Physical Memory (RAM): 16.0 GB
  • Total Physical Memory: 15.7 GB
  • Available Physical Memory: 5.75 GB
  • Total Virtual Memory: 31.7 GB
  • Available Virtual Memory: 20.2 GB
System Type: 64-bit operating system, x64-based processor
OS: Windows 10 Enterprise
System Model: Latitude 5330

 

 

Hope someone can enlighten me on how to improve the processing time, thank you!!

 


6 REPLIES
M001
Helper I

Hi Pete,

 

After weeks of trying, I found out the root cause is the table merges and the summing of columns. I managed to speed up the processing time by using Power Pivot relationships to do all the table merging; however, the laptop I am using is still very slow (it cannot handle more than 2 table merges in Power Query). In fact, I tried my desktop, which has a GPU and better RAM, and it completed a much more complicated model in no time.

 

 

Regards,

M

 

Merge is a 'whole-table' operation, i.e. it requires Power Query to load the entire table into memory to perform the join, while also making multiple scans of the second table to find matching rows.

Generally, the best way to speed up merges is to not do them at all - relate the two tables in your model instead. If you're merging on more than one column at a time, then just combine these columns into a single column in Power Query in both tables so you can still relate them in the data model.
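
As a rough sketch of that last point (the query, column, and key names below are made up purely for illustration): add the same single-column key to both the fact and dimension queries, then relate on that column in the model instead of merging.

let
    // Hypothetical fact query; in practice this would be your existing query
    Sales = #table(
        type table [Product = text, Region = text, Amount = number],
        {{"Bike", "North", 100}, {"Bike", "South", 250}}
    ),
    // Build one composite key column so the data model can relate on a single column
    // instead of Power Query merging on Product + Region
    AddKey = Table.AddColumn(Sales, "ProductRegionKey", each [Product] & "|" & [Region], type text)
in
    AddKey

Repeat the same AddKey step in the dimension query, then create the relationship on ProductRegionKey in the data model.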

 

There are a number of other 'whole-table' operations that should either be minimised or avoided entirely for the same reason as above when using non-relational sources, such as Pivot/Unpivot, Sort, Group By etc.

When using relational sources such as SQL Server, this issue can be negated by maintaining query folding throughout your query, so these memory-intensive operations are performed by the source instead.
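
For example, here's a minimal sketch of a folding query against a hypothetical SQL Server source (server, database, table, and column names are made up, and OrderDate is assumed to be a date column). The filter and column-selection steps are translated into native SQL and executed by the server rather than on the laptop:

let
    // Hypothetical server and database
    Source = Sql.Database("myserver.database.windows.net", "SalesDB"),
    // Navigate to the source table
    Orders = Source{[Schema = "dbo", Item = "FactOrders"]}[Data],
    // Both of these steps fold back to SQL Server, so the heavy lifting happens server-side
    Filtered = Table.SelectRows(Orders, each [OrderDate] >= #date(2023, 1, 1)),
    KeptColumns = Table.SelectColumns(Filtered, {"OrderID", "CustomerID", "OrderDate", "Amount"})
in
    KeptColumns

You can right-click the last applied step and check whether 'View Native Query' is enabled to confirm that folding is still happening.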

 

Additionally, performing calculations within Power Query is often unnecessary and not particularly efficient. Using DAX measures within your model is orders of magnitude faster and more efficient than calculating things in PQ.

 

Pete







Hi Pete, 

 

Thank you so much for your quick response! Please allow me to ask this: if we have to minimize Pivot/Unpivot, Sort, and Group By, and also eliminate calculations in PQ, what should we ideally use PQ for? I find these are the most helpful features in PQ.

 

 

Thanks again,

 

A lot depends on your source system (relational/non-relational) and source data structure. If your source system is a relational data source, then you don't really need to limit anything as long as you maintain query folding.

 

What you *should* use PQ for: Anything that changes the data structure into the most efficient (normalised + star schema) format for the VertiPaq engine and SSAS to work with.

As such, there are some operations you can knock off straight away, and some that are unavoidable. For example:

 

-- Use for: Filtering data to only required rows.

-- Use for: Splitting out dimension tables from fact tables for normalisation/star schema.

-- Use for: Combining columns for multi-column relationships.

-- Sorting: This is almost never necessary to do in PQ (except maybe self-joins that require ordered indexes etc.).

-- Merges: In most cases can be replaced by relationships.

-- Pivot/Unpivot: Only use if the source data is denormalised. Never use to move the data away from a normalised structure to a 'display' structure - matrix visual can do this for you.

-- Group By: Only use if data is extremely large (millions/billions of rows) and requires aggregation, or if required to normalise the data structure.

 

There are obviously many more use-case examples than I can cover here, but the overarching principle is, as above: use PQ only to normalise your structure and/or prepare your tables for a star schema model.
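
To make that concrete, here's a minimal sketch of the idea using a made-up flat extract (all names and data are purely illustrative): keep only the rows you need, split the product attributes out into a dimension, and leave just the key and the measure in the fact table.

let
    // Hypothetical flat extract with product attributes repeated on every row
    RawSales = #table(
        type table [ProductID = Int64.Type, ProductName = text, Category = text, Amount = number],
        {{1, "Bike", "Outdoor", 100}, {1, "Bike", "Outdoor", 250}, {2, "Tent", "Outdoor", 80}}
    ),
    // Use for: filtering data to only required rows
    Filtered = Table.SelectRows(RawSales, each [Amount] > 0),
    // Use for: splitting a dimension table out of the flat data (one row per product)
    DimProduct = Table.Distinct(Table.SelectColumns(Filtered, {"ProductID", "ProductName", "Category"})),
    // Fact table keeps only the key and the measure; relate it to DimProduct in the model
    FactSales = Table.SelectColumns(Filtered, {"ProductID", "Amount"})
in
    FactSales

In practice DimProduct and FactSales would be two separate queries (e.g. both referencing the same staging query) so that each loads to the model as its own table.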

 

Pete







Thank you, Pete! I've learned a lot from you in just a single thread, appreciate your help so much!!

BA_Pete
Super User

Hi @M001 ,

 

Which part of the process is causing you trouble? Is it the Power Query transformations and load to the Data Model, or the display and responsiveness of visuals?

 

Adding RAM would be a last resort even if you were able to, so the first thing we need to look at is optimising the area that is causing you the most issues. Can you provide more information about this please? For example, if your queries are taking a long time to load, please provide the full M code from a typical query that we can optimise. If your report is slow and unresponsive, please provide details around your data connection method (Import, Direct Query etc.), how many visuals you have on a page, which types of visuals these are, the quantity of data each visual is displaying, etc.

 

Pete


