Splunk Search

Need help reviewing a 5-year-old search to improve it.

stevenulbrich
Explorer

Hello - I have a Splunk report that was created 5 years ago and I'm looking for advice: can it be updated to work better? It currently runs, but it seems to take a long time to complete.

I tried to improve it myself, but I'm not knowledgeable enough to be sure I won't break it, so all insights would be greatly appreciated.

  1. I need to make sure the contents of the lookup CSV generate a complete list of jobs; it is important to see when a job is not running. I assume this means a left join from the CSV to the results.
  2. The calculations should use Frequency_mins; the original search seems to be doing additional calculations on top of it.
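A minimal sketch of the point-1 idea, starting from the lookup so that jobs with no recent events still appear (untested against the real data; note the rex only matches thread names ending in "Range", so the two *ArchivalIncrementalExecutor threads would need their own extraction):

```spl
| inputlookup Application_Job_Thread_Name.csv
| join type=left Job_Thread_Name
    [ search index="idx_cibca_Application_prod" sourcetype="tomcat:runtime:log:jpma" "lastUpdatedTS"
      | rex "(?<Job_Thread_Name>\S+Range)"
      | stats latest(_time) as Last_Epoch, latest(host) as host by Job_Thread_Name ]
| eval Lag=round((now()-Last_Epoch)/60)
| eval Status=if(isnull(Lag), "NOT OK - Job not running", "OK")
```

Starting from inputlookup guarantees one row per CSV job, so isnull(Lag) cleanly flags jobs that never logged anything.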

I do apologize for such a large query.

 

index="idx_cibca_Application_prod" sourcetype="tomcat:runtime:log:jpma" AND "lastUpdatedTS" OR "Time taken for" host=Server_1 OR host=Server_2 OR host=Server_3 OR host=Server_4 OR host=Server_5 OR host=Server_6 OR host=Server_7 OR host=Server_8
| eval Job_Thread_Name=case(
    like(_raw,"%tspPaymentArchiveExecutorIncrementalPoolSizeRange%"), "tspPaymentArchiveExecutorIncrementalPoolSizeRange",
    like(_raw,"%completedTxnReplaceIncrementalPoolSizeRange%"), "completedTxnReplaceIncrementalPoolSizeRange",
    like(_raw,"%completedTxnBackoutCdIncrementalPoolSizeRange%"), "completedTxnBackoutCdIncrementalPoolSizeRange",
    like(_raw,"%completedTxnBackoutPdIncrementalPoolSizeRange%"), "completedTxnBackoutPdIncrementalPoolSizeRange",
    like(_raw,"%oneYearCompletedTxnArchivalIncrementalExecutor%"), "oneYearCompletedTxnArchivalIncrementalExecutor",
    like(_raw,"%twoYearCompletedTxnArchivalIncrementalExecutor%"), "twoYearCompletedTxnArchivalIncrementalExecutor",
    like(_raw,"%fxRateExecutorIncrementalPoolSizeRange%"), "fxRateExecutorIncrementalPoolSizeRange",
    like(_raw,"%mfAccountBalancePDExecutorIncrementalPoolSizeRange%"), "mfAccountBalancePDExecutorIncrementalPoolSizeRange",
    like(_raw,"%achPaymentExecutorWithPoolSizeRange%"), "achPaymentExecutorWithPoolSizeRange",
    like(_raw,"%achtemplateIncrementalExecutorWithPoolSizeRange%"), "achtemplateIncrementalExecutorWithPoolSizeRange",
    like(_raw,"%tspPaymentExecutorIncrementalPoolSizeRange%"), "tspPaymentExecutorIncrementalPoolSizeRange",
    like(_raw,"%tspTemplateExecutorIncrementalPoolSizeRange%"), "tspTemplateExecutorIncrementalPoolSizeRange",
    like(_raw,"%acctEntitlementExecutorIncrementalPoolSizeRange%"), "acctEntitlementExecutorIncrementalPoolSizeRange",
    like(_raw,"%achEntitlementExecutorIncrementalPoolSizeRange%"), "achEntitlementExecutorIncrementalPoolSizeRange",
    like(_raw,"%acttxEntitlementExecutorIncrementalPoolSizeRange%"), "acttxEntitlementExecutorIncrementalPoolSizeRange",
    like(_raw,"%atsAccountExecutorIncrementalPoolSizeRange%"), "atsAccountExecutorIncrementalPoolSizeRange",
    like(_raw,"%completedTxnCDExecutorIncrementalPoolSizeRange%"), "completedTxnCDExecutorIncrementalPoolSizeRange",
    like(_raw,"%completedTxnPDExecutorIncrementalPoolSizeRange%"), "completedTxnPDExecutorIncrementalPoolSizeRange",
    like(_raw,"%accountGroupExecutorIncrementalPoolSizeRange%"), "accountGroupExecutorIncrementalPoolSizeRange",
    like(_raw,"%accountNickNameExecutorIncrementalPoolSizeRange%"), "accountNickNameExecutorIncrementalPoolSizeRange",
    like(_raw,"%accountRetentionExecutorIncrementalPoolSizeRange%"), "accountRetentionExecutorIncrementalPoolSizeRange",
    like(_raw,"%mfAccountBalanceCDExecutorIncrementalPoolSizeRange%"), "mfAccountBalanceCDExecutorIncrementalPoolSizeRange",
    like(_raw,"%pymtCutoffExecutorIncrementalPoolSizeRange%"), "pymtCutoffExecutorIncrementalPoolSizeRange",
    like(_raw,"%achFileImportsExecutorWithPoolSizeRange%"), "achFileImportsExecutorWithPoolSizeRange")

| stats latest(_time) as _time , latest(host) as host by Job_Thread_Name

| eval Thread_Last_Executed=strftime(_time, "%Y-%m-%d %I:%M:%S %p"), EPOC_Time=(_time)

| eval  Lag=round((now()-EPOC_Time)/60)

| table Job_Thread_Name, Thread_Last_Executed, host, Lag

 

| lookup Application_Job_Thread_Name.csv Job_Thread_Name OUTPUTNEW Job_Name Job_Config_Name Frequency_Bucket_in_mins

| table  Job_Name, host, Job_Thread_Name, Job_Config_Name, Frequency_Bucket_in_mins, Thread_Last_Executed, Lag

| inputlookup  append=t Application_Job_Thread_Name.csv  

| dedup  Job_Name 

| table  Job_Name, host, Job_Thread_Name, Job_Config_Name, Frequency_Bucket_in_mins , Thread_Last_Executed, Lag  

| eval  Status=if(isnull(Lag), "NOT OK - Job not running", if(Lag<= if(Frequency_Bucket_in_mins>60, Frequency_Bucket_in_mins+10, 70),"OK","NOT OK - Job not running - Lag found"))
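The nested if above can be flattened with an explicit threshold field; this is logically equivalent and easier to audit:

```spl
| eval Threshold_mins=if(Frequency_Bucket_in_mins>60, Frequency_Bucket_in_mins+10, 70)
| eval Status=case(isnull(Lag),         "NOT OK - Job not running",
                   Lag<=Threshold_mins, "OK",
                   true(),              "NOT OK - Job not running - Lag found")
```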

| join  type=left Job_Config_Name  [ search index="idx_cibca_Application_prod" sourcetype="tomcat:runtime:log:jpma" AND "Job Details job name:" host=Server_1 OR host=Server_2 OR host=Server_3 OR host=Server_4 OR host=Server_5 OR host=Server_6 OR host=Server_7 OR host=Server_8

| rex "Job Details job name:(?<Job_Config_Name>.*) status:(?<JOB_STATUS>.*) timetaken:(?<TIMETAKEN>.*) minutes" 

| rex "(?<DATE_TIME>^(\d+)-(\d+)-(\d+)(\s+)(\d+):(\d+):(\d+).(\d+))"

| stats  latest(DATE_TIME) AS Job_Status_Logged latest(JOB_STATUS) AS Job_Status, latest(TIMETAKEN) AS TIMETAKEN_IN_MINS by Job_Config_Name] 

| rename host as Thread_Host

| table  Job_Name, Thread_Host, Job_Thread_Name, Frequency_mins, Thread_Last_Executed,Lag,Status,Job_Status,Job_Status_Logged,TIMETAKEN_IN_MINS 

| eval  Job_Status_Logged = if(isnull(Job_Status_Logged),"NA",Job_Status_Logged), Job_Status = if(isnull(Job_Status),"NA",Job_Status), TIMETAKEN_IN_MINS = if(isnull(TIMETAKEN_IN_MINS),"NA",TIMETAKEN_IN_MINS)
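The three isnull() guards in the last eval can be replaced with a single fillnull, which does the same substitution:

```spl
| fillnull value="NA" Job_Status_Logged Job_Status TIMETAKEN_IN_MINS
```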

 

The CSV file content is:
Job_Config_Name,Job_Name,Job_Thread_Name,Frequency_mins
ach_payment_incremental_loader_task,ACH Payment,achPaymentExecutorWithPoolSizeRange,1
achTemplateIncrementalLoaderTask,ACH Template,achtemplateIncrementalExecutorWithPoolSizeRange,1
tsp_payment_incremental_loader_task,TSP Payments,tspPaymentExecutorIncrementalPoolSizeRange,1
tsp_template_incremental_loader_task,TSP Template,tspTemplateExecutorIncrementalPoolSizeRange,1
acct_entitlement_incremental_loader_task,Account Entitlement,acctEntitlementExecutorIncrementalPoolSizeRange,5
ach_entitlement_incremental_loader_task,ACH Entitlement,achEntitlementExecutorIncrementalPoolSizeRange,5
acttx_entitlement_incremental_loader_task,AT Entitlement,acttxEntitlementExecutorIncrementalPoolSizeRange,5
account_incremental_job,Account,atsAccountExecutorIncrementalPoolSizeRange,5
completed_txn_cd_incremental_job,Completed TXN CD,completedTxnCDExecutorIncrementalPoolSizeRange,5
completed_txn_pd_incremental_job,Completed TXN PD,completedTxnPDExecutorIncrementalPoolSizeRange,5
account_group_incremental_loader_task,Account Group,accountGroupExecutorIncrementalPoolSizeRange,1
account_nickname_incremental_job,Nick Name,accountNickNameExecutorIncrementalPoolSizeRange,5
account_retention_incremental_job,Retention,accountRetentionExecutorIncrementalPoolSizeRange,5
oneYearCompletedTxnArchiveIncrementalJob,One Year Retention,oneYearCompletedTxnArchivalIncrementalExecutor,60
twoYearCompletedTxnArchiveIncrementalJob,Two year Retention,twoYearCompletedTxnArchivalIncrementalExecutor,60
account_balance_cd_incremental_job,Balance Cd,mfAccountBalanceCDExecutorIncrementalPoolSizeRange,5
account_balance_pd_incremental_job,Balance PD,mfAccountBalancePDExecutorIncrementalPoolSizeRange,10
fxrate_incremental_loader_task,FX Rate,fxRateExecutorIncrementalPoolSizeRange,30
completed_txn_replace_incremental_loader_task,Replace,completedTxnReplaceIncrementalPoolSizeRange,480
completed_txn_backout_cd_incremental_loader_task,Backout CD,completedTxnBackoutCdIncrementalPoolSizeRange,360
completed_txn_backout_pd_incremental_loader_task,Backout PD,completedTxnBackoutPdIncrementalPoolSizeRange,360
tsp_payment_archive_incremental_loader_task,TSP payment Archive,tspPaymentArchiveExecutorIncrementalPoolSizeRange,10080
payment_cutoff_incremental_loader_task,Payment Cutoff,pymtCutoffExecutorIncrementalPoolSizeRange,1
ach_fileimports_incremental_loader_task,ACH File Import,achFileImportsExecutorWithPoolSizeRange,1

 


to4kawa
Ultra Champion
index="idx_cibca_Application_prod" sourcetype="tomcat:runtime:log:jpma" AND ("Job Details job name:" OR "lastUpdatedTS" OR "Time taken for" ) host=Server_1 OR host=Server_2 OR host=Server_3 OR host=Server_4 OR host=Server_5 OR host=Server_6 OR host=Server_7 OR host=Server_8
| rex "(?<Job_Thread_Name>\S+Range)"
| rex "Job Details job name:(?<Job_Config_Name>.*) status:(?<JOB_STATUS>.*) timetaken:(?<TIMETAKEN>.*) minutes" 
| rex "(?<DATE_TIME>^(\d+)-(\d+)-(\d+)(\s+)(\d+):(\d+):(\d+).(\d+))" 
| lookup Application_Job_Thread_Name.csv Job_Thread_Name OUTPUTNEW Job_Name Job_Config_Name Frequency_Bucket_in_mins

I don't have the log, so I don't know if it's accurate, but I think it will be faster except for the join.
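For what it's worth, one sketch of dropping the join as well (untested): use a second lookup keyed on Job_Config_Name to map the "Job Details" events back to Job_Thread_Name, then aggregate both event types in a single stats pass:

```spl
index="idx_cibca_Application_prod" sourcetype="tomcat:runtime:log:jpma" ("Job Details job name:" OR "lastUpdatedTS" OR "Time taken for") host=Server_1 OR host=Server_2 OR host=Server_3 OR host=Server_4 OR host=Server_5 OR host=Server_6 OR host=Server_7 OR host=Server_8
| rex "(?<Job_Thread_Name>\S+Range)"
| rex "Job Details job name:(?<Job_Config_Name>.*) status:(?<JOB_STATUS>.*) timetaken:(?<TIMETAKEN>.*) minutes"
| eval Thread_Time=if(isnotnull(Job_Thread_Name), _time, null()), Job_Time=if(isnotnull(Job_Config_Name), _time, null())
| lookup Application_Job_Thread_Name.csv Job_Config_Name OUTPUTNEW Job_Thread_Name
| stats latest(Thread_Time) as Thread_Last_Epoch, latest(host) as Thread_Host, latest(JOB_STATUS) as Job_Status, latest(TIMETAKEN) as TIMETAKEN_IN_MINS, latest(Job_Time) as Job_Status_Epoch by Job_Thread_Name
| lookup Application_Job_Thread_Name.csv Job_Thread_Name OUTPUTNEW Job_Name Job_Config_Name Frequency_mins
| eval Lag=round((now()-Thread_Last_Epoch)/60)
```

Jobs that never logged anything would still need an `inputlookup append=t` pass to show up as rows.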


stevenulbrich
Explorer

Thank you @to4kawa 
Now I have to post a question about combining two searches; I failed to realize the original search is two queries.



stevenulbrich
Explorer

Let me give it a spin - thank you @to4kawa 
