Another issue can cause the same error: mixing up the input and output fields of the lookup. For example, consider a sample lookup file with a field mac_id, where the corresponding field in the events is mac_orig. The mapping mac_id = mac_orig should appear in the lookup definition as: mac_id AS mac_orig. If this order is reversed, the error above is seen.
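As a hedged illustration, an automatic lookup in props.conf follows the same ordering: the lookup-table field comes first, and the event field follows AS. The stanza, lookup, and field names below are hypothetical.

```
# props.conf (illustrative names; mac_id is the field in the lookup table,
# mac_orig is the field present in the events)
[my_sourcetype]
LOOKUP-mac = my_mac_lookup mac_id AS mac_orig OUTPUT mac_owner
```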
After trying everything with reschedule, I had to let it go and, as you suggested, @Kamal_jagga, I used cron_schedule instead of reschedule as the final solution.
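For anyone landing here later, a minimal sketch of what that change looks like in savedsearches.conf; the stanza name and schedule below are hypothetical:

```
# savedsearches.conf (illustrative stanza)
[My Scheduled Search]
enableSched = 1
# Run every 15 minutes via cron_schedule instead of relying on reschedule
cron_schedule = */15 * * * *
```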
It would be wise to add index="_internal" to the search. Also, this search returns both GET and POST events for all dashboards; in my opinion, you should count only POST events for dashboards.
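The original search is not shown in this reply, so the following is only a sketch of the suggested shape, assuming dashboard views are counted from the internal web-access logs; the uri_path filter is illustrative:

```
index="_internal" sourcetype=splunk_web_access method=POST uri_path="*/app/*"
| stats count by uri_path
```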
Apparently I ran into this issue because my production Splunk infrastructure runs 6.4.0 while the lower environment runs 6.5. On 6.5, this much was enough and it worked perfectly:
[mySourcetype]
INDEXED_EXTRACTIONS = json
KV_MODE = none
For 6.4 I had to follow what Gary recommended; many thanks to him for sharing his experience. Here is my props.conf. If you are a beginner, note that the indexer is where you want to update these props, since event breaking is a parsing step.
[mySourcetype]
INDEXED_EXTRACTIONS = json
KV_MODE = none
LINE_BREAKER = (){\"searchString
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
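After deploying, the effective settings can be verified on the indexer with btool (this assumes a default install where $SPLUNK_HOME points at the Splunk directory):

```
$SPLUNK_HOME/bin/splunk btool props list mySourcetype --debug
```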
I am in a similar situation and would like to know what you finally selected and went ahead with. Did you go ahead with what @woodcock suggested?
index=_internal source=*metrics.log group=searchscheduler | timechart partial=false span=1m sum(dispatched) AS Started, sum(skipped) AS Skipped by splunk_server | table _time Started*
DISPATCHED: in my opinion, dispatched counts always come from the captain. You will get a better idea if you run this:
index=_internal sourcetype=splunkd component=Metrics group=searchscheduler host=splunksearchhead* | timechart span=1h sum(completed), sum(skipped) by host
and then see if the searches are getting distributed properly across search heads.
What was the time difference when you checked the PDF and compared it with running the search manually on the dashboard?
The issue can be that the summary index is not getting filled in time: the summary search can be delayed if a lot of saved searches are running, so it gets queued up and executed later than 6 AM.
Have you ever seen messages saying that the maximum number of concurrent searches on Splunk has been reached?
That can be one cause of the problem.
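One way to check for that is to look at what the scheduler skipped, assuming the default scheduler logs; savedsearch_name, status, and reason are standard fields in scheduler.log:

```
index=_internal sourcetype=scheduler status=skipped
| stats count by savedsearch_name, reason
```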
It doesn't fix this; I am facing a similar issue.
If you want just the log file, then the latest of the Splunk calls will have all the search events, so write the log file in write mode instead of append mode. That way data won't be duplicated and you will still have all the events.
My issue is that I need to invoke a shell script on another host when my Python script is called, so this issue causes the remote script to be executed twice as well.
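A minimal sketch of the write-mode suggestion above, assuming a custom alert script that Splunk may invoke more than once with the same results; the function and path names are hypothetical:

```python
def write_events(log_path, events):
    """Write search events to log_path, overwriting any earlier invocation.

    Opening with "w" truncates the file, so even if Splunk calls the
    script twice, the final call leaves a single copy of the events.
    Opening with "a" (append) is what causes the duplicated data.
    """
    with open(log_path, "w") as f:
        for event in events:
            f.write(event + "\n")
```

Note this only de-duplicates the log file; it does not stop the script itself from running twice, so a side effect like invoking a remote shell script still needs its own guard (a lock file, for example).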