I have to show real-time data on my dashboard. For that, I am using the Power BI REST API to push data into a streaming dataset with historic data analysis enabled. I am able to push data using the REST API push URL provided when creating the streaming dataset. However, I need to delete all existing data before pushing each new row: I want to use the visualizations that are only available when historic data analysis is enabled, but I do not actually need the historical data itself, so I want to clear the dataset before pushing each new row. The problem is that the deletion request, made with Python's requests library, returns a 401 Unauthorized error. Do I have to specify the group ID, dataset ID, and table name separately, or should the generated push URL work? I am getting the push URL from the screen below.
>> But I need to delete all the data before pushing each single row
A streaming dataset has no underlying data store; it only receives pushed data and supports no other operations on it.
As with the streaming dataset, with the PubNub streaming dataset there is no underlying database in Power BI, so you cannot build report visuals against the data that flows in, and cannot take advantage of report functionality such as filtering, custom visuals, and so on.
In my opinion, if you do not need historical data, you can simply turn it off.
When Historic data analysis is disabled (it is disabled by default), you create a streaming dataset as described earlier in this article. When Historic data analysis is enabled, the dataset created becomes both a streaming dataset and a push dataset. This is equivalent to using the Power BI REST APIs to create a dataset with its defaultMode set to pushStreaming, as described earlier in this article.
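To clarify the 401 error itself: the push URL's embedded key only authorizes POSTing rows. Deleting rows goes through the authenticated Power BI REST API (the Datasets - Delete Rows endpoint), which needs an Azure AD bearer token and the group ID, dataset ID, and table name specified separately. A minimal sketch of what that request looks like, with placeholder IDs and token (nothing is sent here, the request is only prepared):

```python
# The push URL cannot be used for deletion; the documented call is
#   DELETE /v1.0/myorg/groups/{groupId}/datasets/{datasetId}/tables/{tableName}/rows
# which clears ALL rows in the table (there is no per-row delete).
import requests

def build_delete_rows_request(token, group_id, dataset_id, table_name):
    """Prepare (but do not send) a Delete Rows call with an AAD bearer token."""
    url = (
        "https://api.powerbi.com/v1.0/myorg/"
        f"groups/{group_id}/datasets/{dataset_id}/tables/{table_name}/rows"
    )
    req = requests.Request(
        "DELETE", url, headers={"Authorization": f"Bearer {token}"}
    )
    return req.prepare()

# Placeholder values; sending a DELETE without a valid token is exactly
# what produces the 401 Unauthorized response described above.
prepared = build_delete_rows_request(
    "AAD_ACCESS_TOKEN", "GROUP_ID", "DATASET_ID", "RealTimeData"
)
print(prepared.method, prepared.url)
```

Note that this only works on a push/pushStreaming dataset (historic data analysis enabled); a pure streaming dataset has no stored rows to delete.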
Is there an update on this? Is it on the roadmap? Being able to delete specific rows from a streaming dataset with history enabled has been on my Xmas wish list for a while. Or, failing that, being able to empty a dataset through the web UI.
Simple no-auth POST calls into streaming datasets are super useful, versatile, and easy. Yet, from my personal experience, it does feel extra silly to delete and re-create an entire dataset simply to correct mistakes or undo accidental duplicate posts. I've done it a bunch of times...
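The delete-and-recreate workaround can at least be scripted against the documented Datasets endpoints (Datasets - Delete and Datasets - Post Dataset). A sketch, assuming you already have an Azure AD token; the dataset name, table schema, and IDs below are placeholders, and the requests are only prepared, not sent:

```python
# Scripting the "nuke and rebuild" workaround: drop the dataset, then
# recreate it with defaultMode "pushStreaming" so the historical-analysis
# visuals keep working. All IDs/tokens here are hypothetical.
import json
import requests

API = "https://api.powerbi.com/v1.0/myorg"

def recreate_dataset_requests(token, dataset_id, name, tables):
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }
    # Datasets - Delete: removes the dataset and all of its rows.
    delete_req = requests.Request(
        "DELETE", f"{API}/datasets/{dataset_id}", headers=headers
    ).prepare()
    # Datasets - Post Dataset: recreate as a combined push + streaming dataset.
    body = {"name": name, "defaultMode": "pushStreaming", "tables": tables}
    create_req = requests.Request(
        "POST", f"{API}/datasets", headers=headers, data=json.dumps(body)
    ).prepare()
    return delete_req, create_req

delete_req, create_req = recreate_dataset_requests(
    "AAD_ACCESS_TOKEN",
    "OLD_DATASET_ID",
    "SensorReadings",
    [{"name": "RealTimeData",
      "columns": [{"name": "ts", "dataType": "DateTime"},
                  {"name": "value", "dataType": "Double"}]}],
)
print(delete_req.method, delete_req.url)
print(create_req.method, create_req.url)
```

The catch, of course, is that recreating the dataset generates a new push URL and breaks any pinned dashboard tiles, which is why clearing rows in place (or via the web UI) remains the wish-list item.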