Sometimes the fix is right there in the documentation itself: https://docs.splunk.com/Documentation/AddOns/released/AWS/Troubleshooting

I fixed the issue by updating the splunk-launch.conf file and adding the custom management port. The latest version of the AWS add-on doesn't work with a custom management port; it only works on 8089.
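In case it helps, a hedged sketch of where the management port is commonly defined. The poster edited splunk-launch.conf, but on many deployments the port lives in web.conf as mgmtHostPort, so check both places (the stanza and key below are the standard web.conf ones; treat the file location as an assumption about your setup):

    # $SPLUNK_HOME/etc/system/local/web.conf
    [settings]
    # the AWS add-on reportedly only works when splunkd listens on the default 8089
    mgmtHostPort = 127.0.0.1:8089

Restart Splunk after changing it.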
We faced a similar issue while upgrading to 6.0.2. It turned out the cause was not FIPS; the SSL certificate had expired, and the upgrade surfaced it. The logs below are from mongod.log, not splunkd.log:

    <TIMESTAMP> I CONTROL [signalProcessingThread] shutting down with code:0
    <TIMESTAMP> W CONTROL [main] net.ssl.sslCipherConfig is deprecated. It will be removed in a future release.
    <TIMESTAMP> F NETWORK [main] The provided SSL certificate is expired or not yet valid.
    <TIMESTAMP> F - [main] Fatal Assertion 28652 at src/mongo/util/net/ssl_manager.cpp 1157

The fix is to rename server.pem to server.pem.old, restart Splunk, and rerun the installation. We only found mongod.log because of the KV Store error messages showing up on the search head. Hope this helps someone else avoid spending 3-4 hours on such a trivial upgrade issue.
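A minimal sketch of that fix, assuming the default certificate location $SPLUNK_HOME/etc/auth/server.pem (verify the expiry first to confirm this is actually your problem):

    # check the certificate's validity window
    openssl x509 -enddate -noout -in $SPLUNK_HOME/etc/auth/server.pem
    # move the expired cert aside; Splunk regenerates a default one on restart
    mv $SPLUNK_HOME/etc/auth/server.pem $SPLUNK_HOME/etc/auth/server.pem.old
    $SPLUNK_HOME/bin/splunk restart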
Have you checked the ownership of the directories under $SPLUNK_HOME/var/run/dispatch? It might be that they are owned by root while you are running Splunk as the splunk user. This can happen when someone starts Splunk as root by mistake and later runs it as splunk, which is what happened in our case.
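A quick way to check and fix this, assuming Splunk runs as splunk:splunk (substitute your actual service account):

    # look for root-owned entries left behind by an accidental root start
    ls -ld $SPLUNK_HOME/var/run/dispatch
    find $SPLUNK_HOME/var/run/dispatch ! -user splunk | head
    # reset ownership (with Splunk stopped, to be safe)
    chown -R splunk:splunk $SPLUNK_HOME/var/run/dispatch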
There is another issue that can cause this error. If you mix up the lookup's input and output fields, you get the same message. For example, consider a sample lookup file with the field mac_id, while the corresponding field in events is mac_orig. The mapping is mac_id = mac_orig, and it should show up in the lookup definition as mac_id AS mac_orig. If this order is reversed, the above error is seen.
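As a concrete SPL sketch (the lookup name my_maclookup and the OUTPUT field vendor are hypothetical; the pattern is <lookup field> AS <event field>):

    | lookup my_maclookup mac_id AS mac_orig OUTPUT vendor

Reversing it to mac_orig AS mac_id makes Splunk look for a field named mac_orig inside the lookup file, which doesn't exist there, and the error above appears.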
After trying everything with reschedule, I had to let it go. As you suggested, Kamal_jagga, I went with your final solution of using cron_schedule instead of reschedule.
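For anyone else landing here, a minimal savedsearches.conf sketch of that approach (the stanza name and the every-15-minutes schedule are placeholders):

    [my_scheduled_search]
    enableSched = 1
    # run on a fixed cron schedule instead of relying on reschedule behavior
    cron_schedule = */15 * * * *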
It would be wise to add
index="_internal"
Also, this search returns both GET and POST events for all dashboards. In my opinion you should only count POST events for dashboards.
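A hedged sketch of what the POST-only version might look like (the sourcetype splunk_web_access and the rex pattern are assumptions, since the original search isn't quoted in full):

    index=_internal sourcetype=splunk_web_access method=POST uri=*/app/*
    | rex field=uri "/app/(?<app>[^/]+)/(?<dashboard>[^/?]+)"
    | stats count by app, dashboard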
Apparently I ran into this issue specifically because my prod Splunk infra runs 6.4.0 while the lower environment runs 6.5.

On 6.5, this much alone worked perfectly:
[mySourcetype]
INDEXED_EXTRACTIONS = json
KV_MODE = none
For 6.4 I had to follow what Gary recommended. Many thanks to him for sharing his experience.

Here is my props.conf. Mind you, if you are a beginner, you will want to know that the indexer is where you should update this props, as event breaking is a parsing-time step.
[mySourcetype]
INDEXED_EXTRACTIONS = json
KV_MODE = none
LINE_BREAKER = (){\"searchString
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
I am in a similar situation and would like to know what you finally selected and went ahead with. Did you go with what woodcock suggested?
index=_internal source=*metrics.log group=searchscheduler | timechart partial=false span=1m sum(dispatched) AS Started, sum(skipped) AS Skipped by splunk_server | table _time Started*
Regarding dispatched: in my opinion, dispatched counts always come from the captain. You will get a better idea if you run this:
index=_internal sourcetype=splunkd component=Metrics group=searchscheduler host=splunksearchhead* | timechart span=1h sum(completed), sum(skipped) by host
and then see if the searches are getting distributed properly across search heads.
What was the time difference when you checked the PDF and compared it with running the search manually on the dashboard?

The issue can be the summary index not getting filled: the summary search can be delayed if there are a lot of saved searches running, so the summary-index search gets queued up and executes later than 6 AM.

Have you ever seen messages like "maximum concurrent searches reached"? That can be one cause of the problem.
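A quick way to check for that in the scheduler logs; a minimal sketch (savedsearch_name, status, and reason are the standard scheduler.log fields):

    index=_internal sourcetype=scheduler status=skipped
    | stats count by savedsearch_name, reason
    | sort - count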
It doesn't fix this; I am facing a similar issue.

If you want just the log file, then the latest of the Splunk calls will have all the search events, so open the log file in write mode instead of append mode. That way data won't be duplicated and you will still have all the events.
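A minimal sketch of the write-mode approach in the alert script, assuming a hypothetical results path (the point is that 'w' truncates the file on every invocation, so the duplicate second call overwrites the first instead of appending a second copy):

    import sys

    def log_results(path="/tmp/search_results.log"):
        # hypothetical alert-script excerpt: open in write mode ('w'),
        # not append ('a'), so a duplicate invocation overwrites rather
        # than doubling the events
        with open(path, "w") as f:
            f.write("\n".join(sys.argv[1:]) + "\n")

    if __name__ == "__main__":
        log_results()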
My issue is that I need to invoke a shell script on another host when my Python script is called, so this issue causes the remote script to be executed twice as well.