Hello, since 2018 our application has been logging to Azure Storage, all in a single container, with "folders" broken down as:
My goal is to pull these logs (JSON) into Splunk, so I set up the Add-On and began ingesting data... but it kept stopping at 2018, never getting to 20(19/20/21/22). Investigating why, after quite a bit of tinkering around, I found some internal logs that indicated:
The number of blobs in container [containername] is 5000
Which, upon further research, is the maximum number of results the List Blobs API returns in a single call: the service forces paging at 5000 and hands back a NextMarker (continuation token), and the script apparently never follows that marker to fetch the next page.
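For anyone hitting the same wall, this is roughly the loop the listing code would need. This is a hedged sketch, not the add-on's actual code: the service call is simulated with a plain Python list (`fake_list_blobs` is a stand-in I made up) so the paging logic is runnable on its own. In the real script you'd pass the marker back into the SDK's `list_blobs` call instead.

```python
PAGE_SIZE = 5000  # Azure's hard cap on results per List Blobs call

def fake_list_blobs(all_blobs, marker=0, max_results=PAGE_SIZE):
    """Stand-in for the Azure service call: returns one page of results
    plus a marker for the next page, or None when there are no more pages."""
    page = all_blobs[marker:marker + max_results]
    end = marker + max_results
    next_marker = end if end < len(all_blobs) else None
    return page, next_marker

def list_all_blobs(all_blobs):
    """Keep requesting pages, passing the marker back in each time,
    until the service signals there are no further pages."""
    results, marker = [], 0
    while marker is not None:
        page, marker = fake_list_blobs(all_blobs, marker)
        results.extend(page)
    return results

# More blobs than one page can hold -- without the marker loop,
# a naive listing would stop at the first 5000.
blobs = [f"logs/2018/blob-{i}.json" for i in range(12001)]
print(len(list_all_blobs(blobs)))  # prints 12001
```

The key point is that a single call returning exactly 5000 results is the tell: the listing stopped at one page instead of looping on the continuation marker.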
So... I could go edit the Python script myself, but is there a better way to handle this, or is a fix already in the works? And if not, and I do make the change, is there a GitHub repo or similar where I could submit it?