Hi,
I want to use the "Correlation Plot" visual from the Power BI visuals store to get a better understanding of my dataset.
However, when using measures in the plot, I'm uncertain at what level of granularity Power BI / DAX computes the result, and I want a better understanding of what is actually passed as input to the correlation plot.
I primarily want to compute correlations at an individual customer level. I can obtain this by adding a calculated column to my customer table with the desired measure. However, this doesn't seem ideal to me, as I'm not planning to use the measures for filtering or categorization purposes.
Thus... how can I ensure that the measure I pass to the Correlation Plot is computed at the desired granularity level, i.e. the customer level?
The plot only accepts a column or measure as input, so I'm uncertain whether I can apply the filtering logic suggested by DAX Patterns:

IF( NOT( ISFILTERED( Customer_Key ) ), CALCULATE( X ) )
Does anyone know how to ensure the right results with a measure, or do I need to stick with a calculated column on my customer table?
@Sharon
Hi Lydia,
I believe that I've gotten the answer I needed from this blog post: BPI Community Blog

The key for me was to understand what dataframe is loaded into the R visual when using measures, as measures are calculated differently depending on the granularity. To get the dataframe loaded at the right granularity level, I need to include categorical columns, which will return an error if plotted directly in the R correlation plot available from the store.
The blog describes the importance of this, and also shows how to remove the first three columns before the dataframe is loaded into the R visual.
This means I can use measures instead of calculated columns, thus optimizing performance as I don't need to add multiple columns to my customer or product table to compute my correlation at the right granularity level.
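To illustrate the idea in the solution above, here is a minimal sketch in Python/pandas of what happens inside the visual: placing a categorical key alongside the measures forces the measures to be evaluated per customer, and the key column must then be dropped before computing the correlation. The column names and values below are hypothetical, and this is not the blog's exact R script.

```python
import pandas as pd

# Simulated dataframe as a script visual might receive it: the categorical
# Customer_Key column forces the measures to be evaluated at customer grain.
# All names and values here are made up for illustration.
dataset = pd.DataFrame({
    "Customer_Key": [1, 2, 3, 4, 5],
    "Total Sales":  [100.0, 250.0, 80.0, 120.0, 300.0],
    "Order Count":  [2, 5, 1, 3, 6],
})

# Drop the categorical key column(s) before correlating; a correlation
# function errors on non-numeric grouping columns.
numeric = dataset.drop(columns=["Customer_Key"])

# Pearson correlation matrix, now computed at customer granularity.
corr = numeric.corr()
print(corr)
```

The same pattern applies whether the visual runs R or Python: the grouping columns exist only to set the granularity of the measure evaluation and are stripped before the statistical step.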
@Anonymous,
Could you please share dummy data from your table and post the expected result here?
Regards,
Lydia
Is anyone able to share insights on how R scripts handle data coming from measures?