Evening all.
We have our data warehouse in BigQuery, and have a massive transaction table with millions of items recorded every month. I don't think I want to use DirectQuery due to the pricing structure with GCP, and I also have a requirement for a low grain (customer, date, time, product, discount), meaning I can't aggregate and have a single source for rolling up reporting.
While we have Power BI Premium, I think the data is just too bloody big. Has anyone got suggestions on how to approach this?
#GoogleBigQuery #Transactions #Billion #Billions
Hi @brookbracewell3 ,
I understand that you have a large and complex data warehouse in BigQuery and you want to use Power BI to query and visualize it. There are some best practices that can help you optimize your performance and cost when working with Power BI and BigQuery.
Some of the best practices are:
Best practices when working with Power Query - Power Query | Microsoft Learn
Solved: Best Practice of Google BigQuery in Power BI: how ... - Microsoft Fabric Community
BigQuery Best Practices | Google Cloud - Community (medium.com)
How to Get Your Question Answered Quickly
If it does not help, please provide more details.
Best Regards
Community Support Team _ Rongtie
If this post helps, then please consider Accept it as the solution to help the other members find it more quickly.
The solutions I am looking for (potentially) are around XMLA endpoint partition loading, and the best tooling to do this. Maybe using composite datasets, allowing my massive customer dimension to be loaded separately from my massive fact table. I am already "best practised up" on the schema.
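For anyone landing here later: partition-level loading over the XMLA endpoint is usually driven with a TMSL `refresh` command (sent from SSMS, Tabular Editor, or `Invoke-ASCmd`). A minimal sketch is below — the database, table, and partition names are hypothetical placeholders, and it assumes you have already defined monthly partitions on the fact table (e.g. via incremental refresh policies or scripted TMSL `createOrReplace`):

```json
{
  "refresh": {
    "type": "dataOnly",
    "objects": [
      {
        "database": "SalesModel",
        "table": "FactTransactions",
        "partition": "FactTransactions_2024_01"
      }
    ]
  }
}
```

Refreshing one monthly partition at a time keeps each load small enough to fit within Premium memory limits, and historical partitions never need to be reloaded. A `"type": "full"` refresh would also recalculate dependent objects; `dataOnly` defers that, which is often preferable when loading many partitions in sequence.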