Knowledge Management

Combine the results of two tables into one and add a column displaying the name of the table for each row

man03359
Communicator

I have two queries which give me two tables, named Distributed & Mainframe, as below -

Distributed-

index=idx-esp source="*RUNINFO_HISTORY.*"
| rename STATUS as Status "COMP CODE" as CompCode APPL as ScheduleName NODE_GROUP as AgentName NODE_ID as HostName CPU_TIME as CPU-Time
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| eval Source=if(like(source,"%RUNINFO_HISTORY%"),"Control-M","ESP")
| dedup Source ScheduleName ScheduleDate AgentName HostName JobName StartTime EndTime
| table ScheduleDate  JobName StartTime EndTime Duration
| sort 0 - Duration
| head 10
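
For reference, both queries build Duration the same way: strptime parses the StartTime/EndTime strings into epoch seconds and tostring(..., "duration") formats the difference, with _indextime as a fallback when EndTime is missing. A minimal standalone sketch of that arithmetic, using makeresults with made-up timestamps purely for illustration (the _indextime fallback is skipped because the sample EndTime is always set):

| makeresults
| eval StartTime="2024-01-01 01:00:00", EndTime="2024-01-01 02:30:45"
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval DurationSecs=floor(epoc_end_time - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| table StartTime EndTime DurationSecs Duration

This should return DurationSecs=5445 and Duration=01:30:45.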

 

 

Mainframe-

index=idx-esp source="*RUNINFO_HISTORY.*" DATA_CENTER=CPUA JOB_ID="0*"
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| table ScheduleDate  JOB_MEM_NAME StartTime EndTime Duration
| sort 0 - Duration
| head 10

 

 

I am trying to append both tables into one, using something like this -

index=idx-esp source="*RUNINFO_HISTORY.*"
| rename STATUS as Status "COMP CODE" as CompCode APPL as ScheduleName NODE_GROUP as AgentName NODE_ID as HostName CPU_TIME as CPU-Time
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| eval Source=if(like(source,"%RUNINFO_HISTORY%"),"Control-M","ESP")
| dedup Source ScheduleName ScheduleDate AgentName HostName JobName StartTime EndTime
| table ScheduleDate  JobName StartTime EndTime Duration
| sort 0 - Duration
| head 50
| append
    [search index=idx-esp source="*RUNINFO_HISTORY.*" DATA_CENTER=CPUA JOB_ID="0*"
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| table ScheduleDate  JOB_MEM_NAME StartTime EndTime Duration
| sort 0 - Duration
| head 50]
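
append runs the bracketed search as a subsearch and stacks its rows underneath the main results, so the combined output keeps both JobName (from the Distributed rows) and JOB_MEM_NAME (from the Mainframe rows) as separate columns, each blank for the other set's rows. A tiny standalone sketch of that behaviour, with made-up job names and durations purely for illustration:

| makeresults
| eval JobName="DIST_JOB_A", Duration="00:05:00"
| append
    [| makeresults
    | eval JOB_MEM_NAME="MF_JOB_B", Duration="00:07:30"]
| table JobName JOB_MEM_NAME Duration

If a single job column is wanted, renaming JOB_MEM_NAME to JobName inside the subsearch (or coalescing the two fields after the append) would line them up.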

 

 

The issue is that I am trying to add a column named "Log_Source" at the start which shows either Distributed or Mainframe for each corresponding result, and I am not sure how to achieve it. Please help.


richgalloway
SplunkTrust

Use an eval command in each search (the main search and the appended subsearch) to set the Log_Source field.

index=idx-esp source="*RUNINFO_HISTORY.*"
| eval Log_Source = "Distributed"
| rename STATUS as Status "COMP CODE" as CompCode APPL as ScheduleName NODE_GROUP as AgentName NODE_ID as HostName CPU_TIME as CPU-Time
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| eval Source=if(like(source,"%RUNINFO_HISTORY%"),"Control-M","ESP")
| dedup Source ScheduleName ScheduleDate AgentName HostName JobName StartTime EndTime
| table Log_Source ScheduleDate JobName StartTime EndTime Duration
| sort 0 - Duration
| head 50
| append
    [search index=idx-esp source="*RUNINFO_HISTORY.*" DATA_CENTER=CPUA JOB_ID="0*"
| eval Log_Source="Mainframe"
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| table Log_Source ScheduleDate JOB_MEM_NAME StartTime EndTime Duration
| sort 0 - Duration
| head 50]
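
Because table both selects and orders the columns, listing Log_Source first is what puts it at the start of the combined output; any field set earlier in the pipeline but not named in table is dropped at that point. A minimal illustration with made-up values:

| makeresults
| eval Log_Source="Distributed", JobName="DIST_JOB_A", Duration="00:05:00"
| table Log_Source JobName Duration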

 

---
If this reply helps you, Karma would be appreciated.


man03359
Communicator

@richgalloway 

It worked 🙂 thanks a lot 🙂
