olofda
New Member

Native Power BI data model vs. SSAS Tabular: what changes with the new size limit announcement?

Hi,

 

What are your views on the need for tabular SSAS cubes after the upgrade that will make it possible to upload 400 GB of data to Power BI? A lot of our data models range from 20 to 400 GB, and with the current 10 GB limitation we instead load them into tabular SSAS cubes and connect Power BI to those. But now I am thinking that we could just use the native Power BI data model instead of going through a tabular cube server. I have not seen any discussion of the impact of this upgrade.
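For reference, a quick way to check how much of our estate the new limit would actually cover (a minimal Python sketch; the model names and sizes are invented for illustration, not our real inventory):

    # Hypothetical model inventory (sizes in GB); illustrative only.
    models = {"Sales": 22, "Finance": 85, "SupplyChain": 400, "HR": 8}

    old_limit_gb = 10   # current Power BI dataset size limit
    new_limit_gb = 400  # the announced new limit

    fits_today = [name for name, gb in models.items() if gb <= old_limit_gb]
    fits_after = [name for name, gb in models.items() if gb <= new_limit_gb]

    print(f"Fit under {old_limit_gb} GB today: {fits_today}")
    print(f"Fit under {new_limit_gb} GB after the change: {fits_after}")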

 

This was announced at the keynote earlier this spring, about 27 minutes in.

https://youtu.be/8WhXCwHynEE

 

/Olof

4 REPLIES
vanessafvg
Super User

@olofda My view is that I would first do a proof of concept and test whether it works for your particular environment.

 

Are you currently using DirectQuery? The difference would be loading the data into Power BI directly. Who else uses the SSAS models, and do they have multiple users? Do you expect your data to grow? If you are pushing right up to the max of the capacity, what buffer do you have for growth?
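On the growth buffer, a quick back-of-the-envelope check is worth doing before committing (a minimal Python sketch; the size and growth rate are assumptions to replace with your own numbers):

    # Assumed inputs; replace with your actual model size and observed growth.
    current_size_gb = 300.0
    monthly_growth = 0.03   # 3% compounded growth per month (assumption)
    capacity_gb = 400.0     # the announced size limit

    months = 0
    size = current_size_gb
    while size <= capacity_gb:
        size *= 1 + monthly_growth
        months += 1

    print(f"At {monthly_growth:.0%}/month, a {current_size_gb:.0f} GB model "
          f"hits the {capacity_gb:.0f} GB limit in about {months} months.")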

@vanessafvg thanks for your reply!

 

We have a lot of different setups depending on the use case and audience, ranging from a small group of central users with access to all data to thousands of users worldwide with local access. I am currently looking into how we can optimize our tabular SSAS data models so that we can leverage live connections better (today there is a lot of importing), but seeing this keynote made me wonder if there could be an upside to using the native data models directly in Power BI instead. For example, we would not need dedicated IT teams developing and maintaining SSAS cubes and models if we used extracts from our data lake directly in Power BI, which is something that could be maintained by the data champions and analysts on the business side. It seems to me that such an approach would be a lot neater in terms of "self-service". Is there something important that I am missing here?
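To give a sense of how thin that extract step could be: Power BI Desktop can use a Python script as a data source, where any pandas DataFrame the script defines becomes a table to import. A minimal sketch, assuming a parquet extract at a hypothetical lake path with hypothetical column names:

    # Get Data -> Python script; the path and columns below are hypothetical.
    import pandas as pd

    sales = pd.read_parquet("/lake/extracts/sales_2019.parquet")

    # Light shaping so the model imports clean, typed columns.
    sales["order_date"] = pd.to_datetime(sales["order_date"])
    sales = sales[["order_date", "customer_id", "amount"]]
    # Power BI picks up the 'sales' DataFrame as an importable table.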

 

Capacity and data growth are of course a very important part of the question, but let's assume they are not an issue for the sake of the discussion.


Thanks!

@olofda I think what you are asking is a big question and difficult to answer, as that solution requires pushing those skills to the self-service users, so training and skill levels are vital. It is much bigger than just shifting technology; one needs to understand all the variables of your environment.

 

The best thing to do, in my mind, is to have a pilot group, build an end-to-end solution using this method, and see what it brings up.

 

The benefit of a cube versus pulling those extracts from the data lake every time is that you have a built-in layer that has already interpreted all the business rules; without it, you would have to recreate that logic every time you create a Power BI model (unless you use one model as a source for other models, but scalability might be an issue).
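To make the duplication risk concrete, here is the idea in miniature (a Python sketch; the "net revenue" rule is invented for illustration). The cube gives you one agreed definition; without it, each self-service model re-implements the rule and they can silently drift apart:

    # One shared definition of a business rule, reused by every model...
    def net_revenue(gross: float, returns: float, rebates: float) -> float:
        """Single agreed definition of net revenue (illustrative rule)."""
        return gross - returns - rebates

    # ...versus a report team hand-rolling its own version:
    def net_revenue_team_a(gross, returns, rebates):
        return gross - returns  # rebates forgotten: silent drift

    print(net_revenue(100.0, 5.0, 2.0))         # 93.0
    print(net_revenue_team_a(100.0, 5.0, 2.0))  # 95.0 -- the numbers disagree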

 

What are the issues you are experiencing with the current SSAS setup?


Hi,

 

Thank you for your input and sorry for my late reply.

 

Yes, I know it is a big question 🙂 I wouldn't say we have any issues with the current setup; I am more curious about whether it is time to rethink the SSAS tabular models with the introduction of the 400 GB native data models.

Right now we have dedicated IT resources who help us create tabular cubes according to the business needs and requirements, but this is a long process. If we used the native data models to become more self-service oriented, I completely agree that it would require pushing those skills onto the users, but here we would have a dedicated team on the business side who are already data champions and well connected with our data architecture. The benefit would be that we could create new models and sources of analysis much more quickly.

 

I guess what I am wondering is whether there are some clear pros and cons of the two different setups from a technical perspective, rather than the impact on knowledge and maintainability from a team/people/resource perspective. Does that make sense?

 

Thanks again!

 

/Olof
