
Invisibleman
Helper II

Assign a table to a variable - is that possible?

Dear all,

 

For my Power BI model, I have a database/table for each year, and I have also combined all those years into one table with the append function. Now, as I would think, the smaller the table, the faster the calculations can be done. A sum over 10,000 rows would be faster than a sum over 100,000 rows where 90,000 need to be filtered out.

 

Now I am wondering if it is possible to use a VAR to define a table, and to use this VAR in the calculations. Something like:

 

Measure 2 =
VAR tbl = "ZBI_SalesData_" & [_Highest Selected Year] // which would result in ZBI_SalesData_2021, ZBI_SalesData_2022 or ZBI_SalesData_2023
VAR result = CALCULATE ( SUM ( tbl[Amt] ), 'mapCMEU GL Accounts'[Account] IN { "41110000" } )
RETURN result
 
Can you please let me know if this is possible? And if so, can you help me with it?
 
Thanks,
Hans
1 ACCEPTED SOLUTION
PaulOlding
Solution Sage

TL;DR - No, it's not possible to dynamically reference table and column names like this.
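For completeness: a DAX variable can hold a table expression - the evaluated rows of a table - and you can iterate over it later in the same measure. What it cannot hold is a table name built from a string. A minimal sketch of the former, assuming (hypothetically) a combined table named Sales_All with Year and Amt columns:

Measure 2 =
VAR FilteredRows =
    FILTER ( Sales_All, Sales_All[Year] = 2021 )  // a table expression, not a table name
RETURN
    SUMX ( FilteredRows, Sales_All[Amt] )

The variable keeps the lineage of Sales_All, which is why the column reference inside SUMX still works. But there is no way to make FilteredRows point at ZBI_SalesData_2022 by constructing that name as text.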

 

You sometimes see people using a SWITCH or IF statement to control which table is used in a calculation, giving something like:

SWITCH(
    SELECTEDVALUE('Date'[Year]),
    2021, CALCULATE(SUM(TableName_2021[Amt]), 'mapCMEU GL Accounts'[Account] IN { "41110000" }),
    2022, CALCULATE(SUM(TableName_2022[Amt]), 'mapCMEU GL Accounts'[Account] IN { "41110000" }),
    2023, CALCULATE(SUM(TableName_2023[Amt]), 'mapCMEU GL Accounts'[Account] IN { "41110000" })
    // etc.
)

However, this would be a bad idea.

Of course, it stands to reason that a sum over 10,000 rows would be faster than a sum over 100,000. At this volume of data, though, the difference is minuscule - too small to be perceptible to a human.

The next factor to consider is the internals of VertiPaq and its two engines, the formula engine and the storage engine. If you're interested, the SQLBI guys have good content on this. The short version is that conditionals like SWITCH and IF require extra work from the formula engine, which adds to the time the query takes. Also, VertiPaq in general is architected and optimized for filter-table-and-sum-column type calculations, so it's unlikely you'll ever need to mangle your code like the above to get good performance.

 

So, your strategy of appending all the tables into one is the way to go. All the DAX will be simpler, it'll be easier to maintain, and I wouldn't expect better performance from splitting the table up anyway.
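To illustrate, with everything appended into one table the measure stays a plain filter-and-sum, which is exactly what the storage engine is good at. A sketch, assuming the combined table is called Sales_All with a Year column, and that [_Highest Selected Year] returns the year as a number:

Measure 2 =
VAR SelYear = [_Highest Selected Year]  // evaluate the measure into a variable first
RETURN
    CALCULATE (
        SUM ( Sales_All[Amt] ),
        'mapCMEU GL Accounts'[Account] IN { "41110000" },
        Sales_All[Year] = SelYear
    )

The measure reference is evaluated into the variable up front, so it can then be used as a simple boolean filter inside CALCULATE.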


4 REPLIES

Hello Paul,

 

Thanks for the detailed information. I was already thinking this, but when searching the internet I didn't see a clear statement about it. Now I know that I don't need to pursue this approach any further.

 

And I want to thank you for the help.

 

Regards,

Hans

Anonymous
Not applicable

@Invisibleman 
Could you please share your expected result with an example?

Hello Rohit_joshi_10,

 

In this case there is not really an expected result to show. But I have a couple of tables, each with multiple columns, and I just want to know if it is possible to assign a table to a variable, so that you can use the variable in any formula instead of the actual table name, with all the columns from the original table included. It could be a SUM, SUMX, CALCULATE or whatever command.

 

This is so Power BI doesn't need to filter out the unneeded rows, which I assume will be faster: you address the table directly and Power BI only needs to check/sort the 10,000 rows instead of the 100,000.

 

Regards,

Hans
