Hi,
I am currently using a scheduled search (or master search) that uses the Splunk REST API to get a list of specific saved searches and then uses the "map" command to run each one of them. Each of the saved searches writes some events into the same index.
I am sometimes seeing very strange results (e.g. incomplete data or duplicate data) which leads me to think that my scheduled master search is running into problems with the "map" command.
No skipped-search warnings appear in the logs, and the Splunk instance I am using has plenty of computing power.
Does anyone here know of potential issues with the map command? Any other ideas how I can programmatically call saved searches in a sequence?
Not directly what you are after, but here is one way to check whether the issue is running the saved searches inline with the map command. I have done something similar to reschedule a saved search so that it runs one minute after the call: the map command still drives the operation, but instead of running each search directly, it just reschedules it, so each search then runs in its normal scheduling window:
| rest /servicesNS/nobody/my_app/saved/searches
| where title like "%_saved_search_pattern_%"
| fields title
| map maxsearches=100 search="
| curl debug=false method=post
uri="https://localhost:8089/servicesNS/nobody/my_app/saved/searches/$$title$$/reschedule" datafield="schedule_time=+1m" splunkauth=true
| table *
"
| rex field=curl_response_url ".*/(?<title>[^/]*)/reschedule"
| eval Status=case(curl_status=200,"OK",curl_status=400,"Bad Request",curl_status=401,"Unauthorized",curl_status=403,"Forbidden",true(),"HTTP Status ".curl_status)
| table title, Status
Note that this uses the Splunkbase Webtools app. Also note that there is a bug in the current version where a POST will actually perform a GET; you can patch curl.py to fix that.
Anyway, that worked for me and may help you investigate.
Also, this does not change the original schedule; it just triggers an extra run a minute later, and the next run after that happens according to the original schedule. I am not sure what happens if the report is not scheduled in the first place.
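If map does turn out to be the culprit, another option for "calling saved searches in a sequence" is to drive the loop from outside SPL with the Splunk REST API: POST to each saved search's /dispatch endpoint and wait for one job to finish before starting the next. Here is a minimal, untested sketch using only the Python standard library; the host, owner, app, search names, and auth header are placeholders for your environment:

```python
import urllib.parse
import urllib.request


def dispatch_url(base, owner, app, name):
    """Build the REST URL for dispatching a saved search.

    The /dispatch endpoint is part of the standard Splunk REST API;
    base/owner/app here are placeholders (e.g. https://localhost:8089,
    nobody, my_app). The search name must be URL-encoded.
    """
    return "{}/servicesNS/{}/{}/saved/searches/{}/dispatch".format(
        base, owner, app, urllib.parse.quote(name, safe=""))


def run_sequentially(base, owner, app, names, auth_header):
    """Dispatch each saved search in order, one at a time."""
    for name in names:
        req = urllib.request.Request(
            dispatch_url(base, owner, app, name),
            data=urllib.parse.urlencode({"trigger_actions": "1"}).encode(),
            headers={"Authorization": auth_header},  # e.g. "Bearer <token>"
            method="POST")
        with urllib.request.urlopen(req) as resp:
            # The response XML carries the job sid; to make the sequence
            # strictly serial, poll /services/search/jobs/<sid> until
            # dispatchState is DONE before dispatching the next search
            # (polling loop omitted in this sketch).
            resp.read()
```

Because the loop blocks on each job before moving on, you avoid map's inline concurrency entirely, which should make it easier to tell whether map was causing the incomplete or duplicate data.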