I just installed Splunk 4.1.6 on a host as a lightweight forwarder and added one log file to monitor in inputs.conf.
Currently I am not able to see this new host from the main search head. From the host where I installed Splunk, I can telnet to the main search head, so it can reach it. I also copied forwarder.license to splunk.license after I finished the setup and restarted, so it is running under the forwarder license.
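For reference, a minimal forwarder setup needs both an inputs.conf stanza for the monitored file and an outputs.conf stanza pointing at the receiving indexer; the paths, hostname, and group name below are placeholders for illustration, not my actual values:

```ini
# inputs.conf -- monitor a single log file (path is an example)
[monitor:///var/log/myapp/app.log]
index = main
sourcetype = myapp

# outputs.conf -- forward events to the indexer (host/port are examples;
# 9997 is the conventional splunktcp receiving port)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer.example.com:9997
```

If outputs.conf is missing or points at the wrong port, the forwarder will happily read the file but the host will never show up on the search head.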
After looking in some logs I am now seeing these errors:
02-08-2011 19:15:01.011 WARN SavedSplunker - Maximum number (1) of concurrent scheduled searches reached. 1 ready-to-run scheduled searches pending.
This warning is repeated many times in the log.
The host running Splunk as the lightweight forwarder is on CentOS 5.5.
Any ideas? I'm only monitoring one log file and I haven't scheduled any searches. I've looked at limits.conf and authorize.conf but I'm still unsure. Could it be a permissions issue? Perhaps the splunk user I created doesn't have admin privileges? I'm kind of stumped.
After reading this post: "Getting error 'maximum number of concurrent searches has been reached'"
I looked at limits.conf and authorize.conf and the numbers there were still high. max_searches_per_cpu was set to 2, which is the default; I haven't increased it yet.
This still doesn't answer why the scheduler can run only 1 scheduled search instead of 2. It would be great if it allowed two instances of the same saved-search ID, for searches on a 5-minute schedule that run long. The searches cover totally different time periods, yet the scheduler blocks the next run. What if that search had critical data? Why isn't this limit configurable?
Adding on, this might be the wrong thread:
03-03-2015 16:01:03.313 -0600 WARN SavedSplunker - Max alive instance_count=1 reached for savedsearch_id="test"
Why can't we adjust Max alive instance_count?
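If I understand the scheduler correctly, this per-search instance cap can be raised per saved search in savedsearches.conf via `max_concurrent` (default 1). This is my reading of that setting, not something confirmed in this thread; the stanza name below matches the savedsearch_id from the warning:

```ini
# savedsearches.conf
[test]
# allow up to 2 concurrent instances of this scheduled search,
# so a long-running 5-minute-schedule search doesn't block the next run
max_concurrent = 2
```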
The warning has to do with scheduled searches, not total searches. There is another setting in limits.conf,
max_searches_perc, which limits the max number of scheduled searches to 25% of the total allowed number of concurrent searches.
By default, base_max_searches and max_searches_per_cpu are both 4, which means your max on a two-core machine would be 12 (4 + (4 * 2)). If they're both set to 2, then your max searches would be just 6 (2 + (2 * 2)), and 25% of 6 rounds down to one (1) scheduled search.
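The arithmetic above can be sketched as a quick check. This is a sketch of the limits.conf semantics as described in this answer (total = per-CPU multiplier × cores + base, scheduler quota = 25% of the total, rounded down but never below one):

```python
# Concurrent-search limit math from limits.conf [search]:
#   total searches   = max_searches_per_cpu * cpus + base_max_searches
#   scheduled quota  = max_searches_perc% of the total (floor, min 1)

def max_searches(per_cpu: int, cpus: int, base: int) -> int:
    """Total concurrent historical searches allowed."""
    return per_cpu * cpus + base

def scheduled_quota(total: int, perc: int = 25) -> int:
    """Portion of the total reserved for the scheduler."""
    return max(1, total * perc // 100)

# Defaults of 4 on a two-core box: 4*2 + 4 = 12 total, 3 scheduled slots.
print(max_searches(4, 2, 4), scheduled_quota(max_searches(4, 2, 4)))
# Both knobs set to 2: 2*2 + 2 = 6 total, only 1 scheduled slot.
print(max_searches(2, 2, 2), scheduled_quota(max_searches(2, 2, 2)))
```

That single scheduled slot is exactly why the scheduler logs "Maximum number (1) of concurrent scheduled searches reached" even with almost nothing running.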
I changed max_searches_per_cpu to 4. I didn't see base_max_searches in my limits.conf. After a restart the warning went away, but I still do not see this host in the main Splunk console. I now see a new error: DatabaseDirectoryManager - Cannot access /opt/splunk/var/lib/splunk/sample/db/hot_v1_0/splunk-need-optimize.dat. Ignoring file
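For anyone following along, the change amounts to a local override. The values shown are the ones discussed in the answer above (I'm assuming a default-style install; place this in $SPLUNK_HOME/etc/system/local/limits.conf and restart Splunk):

```ini
[search]
# raise the per-CPU multiplier back to the documented default
max_searches_per_cpu = 4
# base_max_searches was absent from my file; adding it explicitly
# (4 is the default cited in the answer above)
base_max_searches = 4
```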