I'm running the search below:
index=windows sourcetype=windows ((username=123456 event_id=4624) OR ("Account Name: 123456" event_id=4634))
| bin _time span=1d as mday
| eval logon=if(event_id==4624,_time,null())
| eval logoff=if(event_id==4634,_time,null())
| stats min(logon) as logon, max(logoff) as logoff, count(logon) as clogon, count(logoff) as clogoff by mday
| where clogon>1 OR clogoff>1
| convert ctime(logon) ctime(mday) ctime(logoff)
| fields - clogon clogoff
The cutover is 14 days, and since I'm looking for results in January, February, and early March, all data is coming from Hadoop.
The search takes more than 2-3 hours; it is trawling over roughly 15*10^9 events for one month.
The same search over the last 13 days takes less than a minute in Splunk.
I have looked at the search, application, and container logs but did not see anything obvious.
My understanding is that the MapReduce job in Hadoop only uses the timestamp, and all the filtering happens in Splunk. Is this correct?
Where is the bottleneck that makes this search take so much longer than searching exclusively in Splunk?
Thanks in advance for any pointers.
This document can help debug these issues: http://docs.splunk.com/Documentation/Splunk/latest/HadoopAnalytics/TroubleshootSplunkAnalyticsforHad...
Are you running in Verbose mode?
Are you able to access the Hadoop logs to examine the performance of Hadoop itself?
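Not a fix for the underlying bottleneck, but one thing you could try on the search side: the two eval steps can be folded into the stats call using eval expressions inside the aggregation functions, which removes two pipeline stages and a little intermediate work. This is an untested sketch of the same logic, not a guaranteed speedup for your data set:

index=windows sourcetype=windows ((username=123456 event_id=4624) OR ("Account Name: 123456" event_id=4634))
| bin _time span=1d as mday
| stats min(eval(if(event_id==4624,_time,null()))) as logon,
        max(eval(if(event_id==4634,_time,null()))) as logoff,
        count(eval(if(event_id==4624,_time,null()))) as clogon,
        count(eval(if(event_id==4634,_time,null()))) as clogoff by mday
| where clogon>1 OR clogoff>1
| convert ctime(logon) ctime(mday) ctime(logoff)
| fields - clogon clogoff

It should return the same rows as your original search; the main point is that everything after the initial filter stays in the reduce phase.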
Thanks for the response and the link.
It is not that I am seeing errors when the search runs; the issue is the time it takes to complete.
It could be an expectations issue: searches using Hadoop do take longer, but I was not expecting the observed difference for the same time interval and a similar number of events.
Also, I was expecting some wiggle room for tuning and improving performance, which I haven't found yet.
I do have access to YARN and have checked the application logs. All complete successfully...
Should I be checking anything in particular?