Something weird happened on the server: splunkd was running, but Splunk Web was down. A restart brought everything back to normal, but I need to know why Splunk Web went down. I found nothing suspicious in splunkd.log or web_service.log.
Is there any way to find out why Splunk Web was down?
Check the _internal index for the webservice.log events. Do you see anything just before Splunk Web stopped?
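A sketch of the kind of search meant here (the source wildcard and the `log_level` field are assumptions; set the time range to the window just before the outage):

```
index=_internal source=*web_service.log* (log_level=ERROR OR log_level=WARN)
| sort 0 _time
```

If that returns nothing, drop the `log_level` filter and eyeball the last events written before the process died.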
Otherwise, if you are on Linux, check /var/log/messages for any "Out of memory" (OOM) events; the kernel's OOM killer can terminate a process.
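For example, a quick way to scan the syslog for OOM-killer activity (the path is an assumption: RHEL/CentOS log to /var/log/messages, while Debian/Ubuntu typically use /var/log/syslog):

```shell
# Look for kernel OOM-killer events in the syslog (path assumed: RHEL/CentOS).
# Matches both the "invoked oom-killer" line and the "Out of memory: Kill" line.
if [ -f /var/log/messages ]; then
    grep -iE "oom-killer|Out of memory" /var/log/messages \
        || echo "no OOM events found in /var/log/messages"
fi
```

Correlate the timestamps of any hits with the time Splunk Web stopped responding.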
I have checked the messages file and found this:

sisidsdaemon invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0

and after that:

Out of memory: Kill process 3936 (python) score 730 or sacrifice child

So it looks like Splunk Web went down due to out of memory.
I understand now that the root cause was out of memory, but is there any way to check the memory consumption of a search in Splunk?
Yeah, I know about that, but SoS was not installed on the Splunk server when the issue happened. That is why I am asking: by looking at event count, total run time, etc., can we understand the memory consumption, or at least the relation between memory consumption and event count or total run time?
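Without SoS, the _audit index still records per-search statistics after the fact. A sketch of the kind of search that surfaces them (field names such as `total_run_time` and `event_count` are as they appear in audit.log on recent versions; verify against your install):

```
index=_audit action=search info=completed
| table _time user search_id total_run_time event_count scan_count
| sort - total_run_time
```

Note that _audit has no memory field, so this only gives an indirect signal: long-running searches with high scan counts tend to hold more memory, but the correlation is loose, and two searches with the same event count can differ widely in memory use depending on the commands involved.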