I'm working with a data model with a view to optimising report performance. The model is designed as a star schema, but with 3 different many-to-many relationships. We have bridge tables to resolve the cardinality between fact <-> bridge <-> dimension, i.e. using bidirectional (cross-filtering) relationships.
The bridge tables also contain observations that are relevant at the grain of the bridge table, hence they are more like referenced fact tables.
The measures are split roughly a third each across the fact table and the 2 bridge tables. No time intelligence (yet).
There is also dynamic RLS via a department-type table that is linked to one of the bridges.
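(For context, dynamic RLS of this kind is usually a DAX filter expression on the security table — a minimal sketch, assuming a hypothetical Department table with an Email column:

```dax
-- Role filter defined on the hypothetical Department table:
-- each user sees only the rows matching their own login,
-- and that filter propagates through the relationship to the bridge.
[Email] = USERPRINCIPALNAME()
```

The propagation from the security table through the bridge is exactly what interacts with cross-filter direction later in this thread.)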
In this scenario, do you think it's more optimal to create specific explicit measures that use CROSSFILTER() to change the cross-filter direction for measures based on bridge table columns, leaving the relationships single-directional, i.e. fact -> bridge <- dimension?
I'm thinking of experimenting to see how performance changes.
I don't think RLS works with CROSSFILTER() - this could be a constraint against using it.
Users are looking to combine measures based on the bridge tables with measures from the fact table and multiple dimensions - even crossing the bridge tables to reach dimensions - and, I might add, across multiple report visuals.
The M2M already adds complexity hence I'm looking to improve performance without increasing complexity - if possible.
I'm guessing there is no easy option for this other than to experiment - I'm wondering if anyone has dealt with this scenario before?
I can't use CROSSFILTER() to let users slice across the bridge tables, hence I am boxed into a corner to some degree and have to use bidirectional (cross-filtering) relationships.
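For reference, the experiment described above would look something like this in DAX — a sketch only, with hypothetical table and column names (Fact, Bridge1, BridgeKey, Amount):

```dax
-- Baseline: relies on a bidirectional Fact <-> Bridge1 relationship in the model
Bridge Amount := SUM ( Bridge1[Amount] )

-- Explicit variant: the model relationship stays single-direction,
-- and the measure turns on bidirectional filtering only for its own evaluation.
-- Note: CROSSFILTER cannot override filters applied by row-level security.
Bridge Amount (Explicit) :=
CALCULATE (
    SUM ( Bridge1[Amount] ),
    CROSSFILTER ( Fact[BridgeKey], Bridge1[BridgeKey], BOTH )
)
```

The trade-off is exactly as stated above: the explicit variant fixes the filter behaviour per measure, but it does nothing for slicing across bridge tables in visuals, which still needs the model-level relationship.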
Hi @lee_hawthorn,
In fact, M2M relationships are not fully suited to the Power BI data model (the Analysis Services tabular model). They will affect performance, and they do not work the same as many-to-many relationships in other types of databases.
Relationships in analysis services tabular models
In addition, you can refer to the following blog about creating a relationship on multiple columns via a bridge table, to reduce the number of bridge tables:
Relationship in Power BI with Multiple Columns
Regards,
Xiaoxin Sheng
I can't eliminate the bridge tables as they hold observations (columns) at the more detailed grain.
If I brought these observations into the fact table I would have to change the grain of the fact table, and then I would have duplicated values in it.
I do bring summarised values into the fact table and base measures on those, but some measures have to be based on bridge table columns, as users can slice across the related Dim -> Bridge.
I'm well aware of how Power BI handles M2M, i.e. it duplicates records from the many-to-one leg yet still provides accurate sub-totals.
I suppose I was right to use bidirectional cross-filtering in this case.
The performance isn't bad - it only gets very slow when users request data from all the bridge tables and their related dimensions.
All the measures are quite simple - things like SUM() and COUNTROWS(). Anything more complex that requires data across the M2M relationships needs huge amounts of memory (64 GB+) and is very, very slow; HASONEVALUE() across M2M, for instance, is a no-go.
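To make the contrast concrete, a sketch with hypothetical names (Fact, Dim1, Quantity, Department): the first two measures are cheap single-table aggregations, while the third forces the engine to resolve the expanded fact <-> bridge <-> dimension path for every cell of the visual.

```dax
-- Cheap: simple aggregations over one table
Row Count := COUNTROWS ( Fact )
Total Qty := SUM ( Fact[Quantity] )

-- Expensive across M2M: HASONEVALUE/VALUES must evaluate the
-- dimension's filter context through the bridge for each cell
Dept Label :=
IF (
    HASONEVALUE ( Dim1[Department] ),
    VALUES ( Dim1[Department] ),
    "Multiple"
)
```

This is only an illustration of the pattern described above, not the actual model's measures.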
Hi @lee_hawthorn ,
I have worked on similar scenario before.
I avoid using CROSSFILTER, as it will mess up the RLS in your report.
Coming to the measures, I will need more detail on what kind of measures you are planning to create.
Can these measures be created using only directly related tables, or do you need them to extend to non-directly-related tables as well?
Thanks,
Pragati
The measures are all based on columns in the fact table and bridge tables - simple SUM()/COUNTROWS() types. Anything more complex is very memory-intensive.
I can't really use indirect measures as I would run into circular references pretty quickly.
Users apply many slicers, though, hence the generated query has multiple FILTER() calls.
When users build a table visual across multiple M2M relationships, it is very slow. I appreciate why this is the case, i.e. the user is asking for all the data from lots of tables. This is not what Power BI is designed for.
I'm thinking of providing a pre-calculated table in the supporting lake, i.e. to support one specific use case. I know this will be highly performant. I think M2M in Power BI is not suitable for these large detail-type visuals; summary visuals are fine, though.