I have a strange problem that I would like to share, hoping someone knows what is going on or might even know a fix.
I have a report with a dataset of around 44 tables, most of them having between 20,000 and 200,000 rows; four tables have over 1,000,000 rows. The tables in the dataset are sourced from CSV files (and a few from Excel sheets) located in an Office 365 SharePoint folder.
I know a database or Azure Storage would be a better fit, but the customer demanded a solution built entirely with Office 365 resources.
The whole SharePoint folder is imported into Power Query via a parameter pointing to it, loading the entire folder in a single query. Each table is then transformed as needed by referencing the "Content" column of the corresponding row in this folder query. Only the table queries are enabled for loading into the Power BI report.
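To make the setup concrete, here is a minimal Power Query M sketch of that pattern. The parameter name `FolderPath` and the file name `Sales.csv` are illustrative placeholders, not the actual names from my report:

```powerquery
// Hypothetical sketch of the folder-query pattern described above.
let
    // FolderPath is the report parameter pointing at the SharePoint folder
    Source = SharePoint.Files(FolderPath, [ApiVersion = 15]),
    // One table query picks a single file's row from the folder query
    // and parses its binary "Content" column
    RawContent = Source{[Name = "Sales.csv"]}[Content],
    Parsed = Csv.Document(RawContent, [Delimiter = ",", Encoding = 65001]),
    Promoted = Table.PromoteHeaders(Parsed, [PromoteAllScalars = true])
in
    Promoted
```

Each of the ~44 table queries follows this shape, differing only in the file it selects and the transformations applied afterwards.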
Until now everything went well, and I built a report containing around 50 pages. Then I exported the report as a template. When I open the template, it asks for the parameter to choose the SharePoint folder and then starts loading the data.
Depending on the internet connection, loading can currently take up to 3 hours, but it finishes without errors. However, when I compare the data in the report generated from the template with the report that was the origin of the template, I find that some tables in the newly loaded report are missing rows. When I select these tables in Power BI and hit Refresh from the context menu, they reload and then have the correct number of rows.
I don't know where this comes from; the tables with missing rows are not always the same, nor are they always the largest tables in each report generated from the template.
Does anyone have an idea about this, or has anyone experienced similar behavior? I have already disabled all relationship, background data, and parallel loading options for the report file.