
Combine the results of two tables into one and add a column displaying the name of the table for each row

man03359
Communicator

I have two queries that give me two tables, named Distributed and Mainframe, as below.

Distributed:

index=idx-esp source="*RUNINFO_HISTORY.*"
| rename STATUS as Status "COMP CODE" as CompCode APPL as ScheduleName NODE_GROUP as AgentName NODE_ID as HostName CPU_TIME as CPU-Time
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| eval Source=if(like(source,"%RUNINFO_HISTORY%"),"Control-M","ESP")
| dedup Source ScheduleName ScheduleDate AgentName HostName JobName StartTime EndTime
| table ScheduleDate  JobName StartTime EndTime Duration
| sort 0 - Duration
| head 10

Mainframe:

index=idx-esp source="*RUNINFO_HISTORY.*" DATA_CENTER=CPUA JOB_ID="0*"
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| table ScheduleDate  JOB_MEM_NAME StartTime EndTime Duration
| sort 0 - Duration
| head 10

I am trying to append both tables into one, using something like this:

index=idx-esp source="*RUNINFO_HISTORY.*"
| rename STATUS as Status "COMP CODE" as CompCode APPL as ScheduleName NODE_GROUP as AgentName NODE_ID as HostName CPU_TIME as CPU-Time
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| eval Source=if(like(source,"%RUNINFO_HISTORY%"),"Control-M","ESP")
| dedup Source ScheduleName ScheduleDate AgentName HostName JobName StartTime EndTime
| table ScheduleDate  JobName StartTime EndTime Duration
| sort 0 - Duration
| head 50
| append
    [search index=idx-esp source="*RUNINFO_HISTORY.*" DATA_CENTER=CPUA JOB_ID="0*"
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| table ScheduleDate  JOB_MEM_NAME StartTime EndTime Duration
| sort 0 - Duration
| head 50]

The issue is that I want to add a column named "Log_Source" at the start that shows either Distributed or Mainframe for each corresponding result. I am not sure how to achieve this. Please help.
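
For clarity, the combined output I'm after would look roughly like this (the values are made up for illustration):

Log_Source    ScheduleDate   JobName / JOB_MEM_NAME   StartTime             EndTime               Duration
Distributed   2024-05-01     DIST_JOB_A               2024-05-01 01:00:00   2024-05-01 02:10:00   01:10:00
Mainframe     2024-05-01     MFJOB01                  2024-05-01 01:30:00   2024-05-01 01:45:00   00:15:00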

richgalloway
SplunkTrust

Use an eval command in each search (the main search and the appended subsearch) to set the Log_Source field, and add Log_Source to each table command so the new column shows up in the combined results.

index=idx-esp source="*RUNINFO_HISTORY.*"
| eval Log_Source = "Distributed"
| rename STATUS as Status "COMP CODE" as CompCode APPL as ScheduleName NODE_GROUP as AgentName NODE_ID as HostName CPU_TIME as CPU-Time
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| eval Source=if(like(source,"%RUNINFO_HISTORY%"),"Control-M","ESP")
| dedup Source ScheduleName ScheduleDate AgentName HostName JobName StartTime EndTime
| table Log_Source ScheduleDate JobName StartTime EndTime Duration
| sort 0 - Duration
| head 50
| append
    [search index=idx-esp source="*RUNINFO_HISTORY.*" DATA_CENTER=CPUA JOB_ID="0*"
| eval Log_Source="Mainframe"
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| table Log_Source ScheduleDate JOB_MEM_NAME StartTime EndTime Duration
| sort 0 - Duration
| head 50]
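
If you want to try the pattern in isolation first, here is a minimal, self-contained sketch using makeresults, so anyone can run it even without the idx-esp data. All field names and values in it are made up for illustration, and the ``` comments are optional.

| makeresults count=2
| streamstats count AS row
| eval JobName="dist_job_" . row, Duration="00:0" . row . ":00"
| eval Log_Source="Distributed"   ``` tag rows produced by this search ```
| table Log_Source JobName Duration
| append
    [| makeresults count=2
    | streamstats count AS row
    | eval JOB_MEM_NAME="MFJOB0" . row, Duration="00:0" . row . ":30"
    | eval Log_Source="Mainframe"   ``` tag rows produced by the appended search ```
    | table Log_Source JOB_MEM_NAME Duration]

Each search tags its own rows before append, so Log_Source survives into the combined table. Because your two searches use different job-name fields (JobName vs JOB_MEM_NAME), the appended rows will fill separate columns; if you want a single job column, rename one field to match the other before its table command.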

---
If this reply helps you, Karma would be appreciated.


man03359
Communicator

@richgalloway 

It worked 🙂 thanks a lot 🙂
