
bexbissell
Frequent Visitor

Data Model Design

Hi folks,

 

Power BI novice at work here! I am looking for some feedback on how to design a data model and transform data to meet my requirements.

 

Background

Every year we produce data that is a snapshot of our industry: there are Accounts (companies), the roles they play (Administrator, Custodian, Legal Adviser, etc.) and the Groups they service - here is a simplified data model:
Physical Data Model.png

So, from year to year the data can look like this:

Year 1

Physical Object Model - Y1.png

Year 2

...and next year the data will look like:

Physical Object Model - Y2.png


Every year we will ingest the full dataset into the model so that we can generate reports for that year, such as 'Top Administrators' or 'Largest Group'. I would also like to see the evolution of a particular Account or Group over time - for example, business won through new Groups, or business lost.

 

Solution?

I have followed the 'Star Schema' pattern and consolidated the imported tables to reduce the number of Fact tables (not shown):

Star Schema.png

 

Questions

1. Year-on-year I will be appending a new dataset to the Fact tables - is that a good approach? Any tips here? (There is a sketch of what I mean after these questions.)

2. Assuming yes to the above, there will be duplication of the Account, Service Provider and Group records. In the Transform, should I create a composite key for each record each year, e.g. based on the AccountID, ServiceProviderID and GroupID combined with the Reporting Date, and use that in the Fact table relationships? Does that make sense, or is there a better way?
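
To make that concrete, here is a rough Power Query (M) sketch of what I have in mind - the query and column names (Facts2023, Facts2024, AccountID, ReportingDate) are placeholders, not my real ones:

```
// Placeholder yearly snapshot queries, each with AccountID,
// ServiceProviderID, GroupID and ReportingDate columns.
let
    // Question 1: append the new year's snapshot onto the existing fact rows
    Combined = Table.Combine({Facts2023, Facts2024}),

    // Question 2: build a composite key from the natural key plus the
    // reporting year, so each year's Account row stays distinct
    AddKey = Table.AddColumn(
        Combined,
        "AccountYearKey",
        each Text.From([AccountID]) & "|" & Text.From(Date.Year([ReportingDate])),
        type text
    )
in
    AddKey
```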

 

Your feedback is very much appreciated.

Many thanks

1 ACCEPTED SOLUTION

Hi @bexbissell ,

 

I can also offer some suggestions for optimizing the model.

If you are using DirectQuery connection mode, you can optimize your data model with the following tips:

  • Remove unused tables or columns, where possible.
  • Avoid distinct counts on fields with high cardinality – that is, millions of distinct values.
  • Take steps to avoid fields with unnecessary precision and high cardinality. For example, you could split highly unique datetime values into separate columns – for example, month, year, date, and so on. Or, where possible, use rounding on high-precision fields to lower cardinality (for example, 13.29889 -> 13.3). A sketch of both techniques follows this list.
  • Use integers instead of strings, where possible.
  • Be wary of DAX functions that need to test every row in a table – for example, RANKX – in the worst case, these functions can exponentially increase run-time and memory requirements given linear increases in table size.
  • When connecting to data sources via DirectQuery, consider indexing columns that are commonly filtered or sliced on. Indexing greatly improves report responsiveness.
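
As a rough illustration of the splitting and rounding tips above, here is a minimal Power Query (M) sketch - the query and column names (Facts, EventDateTime, Amount) are hypothetical:

```
// Hypothetical fact query with a high-cardinality EventDateTime column
// and a high-precision Amount column.
let
    Source = Facts,

    // Split the datetime into lower-cardinality parts
    AddYear  = Table.AddColumn(Source,   "Year",  each Date.Year([EventDateTime]),  Int64.Type),
    AddMonth = Table.AddColumn(AddYear,  "Month", each Date.Month([EventDateTime]), Int64.Type),
    AddDate  = Table.AddColumn(AddMonth, "Date",  each DateTime.Date([EventDateTime]), type date),

    // Round the high-precision value to one decimal place (13.29889 -> 13.3)
    Rounded = Table.TransformColumns(AddDate, {{"Amount", each Number.Round(_, 1), type number}}),

    // Drop the original high-cardinality column
    Result = Table.RemoveColumns(Rounded, {"EventDateTime"})
in
    Result
```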

 

For more model optimization guidelines, you can refer to the following documentation: Optimization guide for Power BI - Power BI | Microsoft Docs


Looking forward to your feedback.

Best Regards,
Henry

If this post helps, then please consider accepting it as the solution to help other members find it more quickly.


3 REPLIES
amitchandak
Super User

@bexbissell , the model looks good.

A numeric key for composite keys is good whenever needed. But if you generate it in Power BI, it will add a lot of cost at load time.
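
If you do go that way, this is roughly the pattern in Power Query - a sketch only, with assumed table and column names (Fact, AccountID, ReportingDate):

```
// Sketch: derive a numeric surrogate key for the composite
// (AccountID, ReportingDate). Names are assumptions, not from the model.
let
    // Distinct composite values become a small key table with an index
    KeyTable = Table.AddIndexColumn(
        Table.Distinct(Table.SelectColumns(Fact, {"AccountID", "ReportingDate"})),
        "AccountKey", 1, 1
    ),

    // Merge the numeric key back onto the fact rows -
    // this merge is the load-time cost mentioned above
    Merged = Table.NestedJoin(
        Fact, {"AccountID", "ReportingDate"},
        KeyTable, {"AccountID", "ReportingDate"},
        "Key", JoinKind.Inner
    ),
    Expanded = Table.ExpandTableColumn(Merged, "Key", {"AccountKey"})
in
    Expanded
```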

 

 

Thanks for the feedback on the model schema @amitchandak - I've learnt a lot about model schemas, good modelling practice and model relationships; now I'm just trying to put it into practice.

The dataset is relatively small, so generating the composite keys on load/transform does not take long (20 seconds).

