Hi,
I've been refreshing my .pbix data by loading very large Avro files (one generated every hour) from my Azure Blob storage. Now I'd like to run another loading query that pulls 2-3 months of data using only one file per day.
Does anyone know a way to skip the other 23 files (i.e. 23 hours) during ingestion, so the load is faster, before using M to select just one file per day?
Thank you.
M
Hi @miguelsus2000 ,
Based on your description, my advice is to keep only the source files you need in Azure Data Lake Storage. The Power BI Data Lake Storage connector can connect to a specific file (or files) without importing the entire folder: just paste the full file URL. (related blog link)
Refer to such similar solutions:
Solved: Select specific files from Azure Blob Storage - Microsoft Power BI Community
If the problem is still not resolved, please let us know. Looking forward to your feedback.
Best Regards,
Henry
If this post helps, then please consider Accept it as the solution to help the other members find it more quickly.
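Henry's suggestion of pointing the connector at a single blob rather than the whole container can be sketched in M. The account, container, and file path below are hypothetical placeholders; substitute your own values:

```m
// Hypothetical sketch: connect directly to one blob instead of the whole container.
let
    Source = AzureStorage.Blobs("https://myaccount.blob.core.windows.net/mycontainer"),
    // Keep only the one blob we want, identified by its full name, BEFORE
    // touching the Content column, so no other files are downloaded.
    Selected = Table.SelectRows(Source, each [Name] = "2023/05/01/data-00.avro"),
    FileContent = Selected{0}[Content]
in
    FileContent
```

Because the blob listing is lazy, filtering on the `Name` column before expanding `Content` means only the selected file's bytes are ever fetched.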
Hi... Thank you for the response. The problem is that it loads all 24 files per day before any M processing even starts. Is there a way to pick just one file per day before or during loading?
@miguelsus2000 , create an M parameter and use it as a filter at the start of the query; later you can remove it or change its value in the deployment pipeline.
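The M-parameter approach above can be sketched as follows. This assumes a hypothetical naming convention of `yyyy/MM/dd/data-HH.avro` for the hourly files and an M parameter named `SelectedHour` (e.g. `"00"`) defined in the model:

```m
// Hypothetical sketch: keep one file per day by filtering the blob listing
// on filename before the Content column is evaluated, so the other 23
// hourly files are never downloaded.
let
    Source = AzureStorage.Blobs("https://myaccount.blob.core.windows.net/mycontainer"),
    // SelectedHour is an M parameter, e.g. "00"; adjust the pattern to your naming.
    OnePerDay = Table.SelectRows(
        Source,
        each Text.EndsWith([Name], "-" & SelectedHour & ".avro")
    ),
    // Only the surviving rows' Content is fetched in later steps.
    Result = Table.SelectColumns(OnePerDay, {"Name", "Content"})
in
    Result
```

The key point is that `AzureStorage.Blobs` returns file metadata first and fetches each blob's bytes lazily, so a filter on `Name` placed before any use of `Content` reduces what is actually transferred during refresh.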