Hello there,
First of all, a big thanks for any help received.
Well, I'm not a BI developer; I'm actually a C# game developer venturing in here.
I have an extensive data set from one of the games I'm developing, which records gameplay data for each match, for each user.
With a small data set, I could flatten all the matches as I needed with the following code:
My issues appear when applying this to the real production data, which has more than 26k users (the total rows in this code), and each user can have N matches recorded.
The query has been running for more than an hour with no end in sight.
Is there a way to optimize this first query, just flattening the records into one big table that I'll use later to generate insights?
Thanks in advance
There's probably a way to optimize your M code, but in your situation where you're in charge of the app, I'd flatten the JSON into one or more tables (depending on data structure) in a staging database which I'd use as the source for Power BI. Power Query is convenient but it's not fast. Also you won't be able to do incremental refresh with a file source so things will only get worse as your dataset grows over time.
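A minimal sketch of that staging approach, since you control the app: flatten the nested JSON into one row per (user, match) and load it into a database table that Power BI reads directly, instead of unpacking the JSON in Power Query at every refresh. All field names here (`user_id`, `matches`, `match_id`, `score`, `duration_s`) are assumptions for illustration, not your actual game schema, and SQLite stands in for whatever staging database you'd use.

```python
import json
import sqlite3

# Hypothetical nested payload: one record per user, each holding a list
# of matches. Field names are assumptions, not the real game schema.
raw = json.loads("""
[
  {"user_id": 1, "matches": [
      {"match_id": "a", "score": 120, "duration_s": 300},
      {"match_id": "b", "score": 95,  "duration_s": 240}
  ]},
  {"user_id": 2, "matches": [
      {"match_id": "c", "score": 200, "duration_s": 310}
  ]}
]
""")

# Flatten to one row per (user, match).
rows = [
    (u["user_id"], m["match_id"], m["score"], m["duration_s"])
    for u in raw
    for m in u["matches"]
]

# Load into a staging table; Power BI then queries this table
# instead of re-flattening JSON on every refresh.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE matches (
        user_id INTEGER, match_id TEXT, score INTEGER, duration_s INTEGER
    )
""")
conn.executemany("INSERT INTO matches VALUES (?, ?, ?, ?)", rows)
print(conn.execute("SELECT COUNT(*) FROM matches").fetchone()[0])  # 3
```

Doing this once on the app side (or in a scheduled ETL job) turns the Power BI refresh into a plain table scan, and a database source also makes incremental refresh possible later.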