Knowledge Management

Combine the results of two tables into one and add a column displaying the name of the source table for each row

man03359
Communicator

I have two queries giving me two tables, named Distributed and Mainframe, as below:

Distributed:

index=idx-esp source="*RUNINFO_HISTORY.*"
| rename STATUS as Status "COMP CODE" as CompCode APPL as ScheduleName NODE_GROUP as AgentName NODE_ID as HostName CPU_TIME as "CPU-Time"
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| eval Source=if(like(source,"%RUNINFO_HISTORY%"),"Control-M","ESP")
| dedup Source ScheduleName ScheduleDate AgentName HostName JobName StartTime EndTime
| table ScheduleDate  JobName StartTime EndTime Duration
| sort 0 - Duration
| head 10

Mainframe:

index=idx-esp source="*RUNINFO_HISTORY.*" DATA_CENTER=CPUA JOB_ID="0*"
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| table ScheduleDate  JOB_MEM_NAME StartTime EndTime Duration
| sort 0 - Duration
| head 10

I am trying to append the two tables into one, using something like this:

index=idx-esp source="*RUNINFO_HISTORY.*"
| rename STATUS as Status "COMP CODE" as CompCode APPL as ScheduleName NODE_GROUP as AgentName NODE_ID as HostName CPU_TIME as "CPU-Time"
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| eval Source=if(like(source,"%RUNINFO_HISTORY%"),"Control-M","ESP")
| dedup Source ScheduleName ScheduleDate AgentName HostName JobName StartTime EndTime
| table ScheduleDate  JobName StartTime EndTime Duration
| sort 0 - Duration
| head 50
| append
    [search index=idx-esp source="*RUNINFO_HISTORY.*" DATA_CENTER=CPUA JOB_ID="0*"
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| table ScheduleDate  JOB_MEM_NAME StartTime EndTime Duration
| sort 0 - Duration
| head 50]

The issue is that I want to add a column named "Log_Source" at the start which shows either Distributed or Mainframe for each corresponding row. I am not sure how to achieve it. Please help.

1 Solution

richgalloway
SplunkTrust

Use an eval command in each search (one in the main search and one in the subsearch) to set the Log_Source field, and include Log_Source in each table command so the field survives into the final results.

index=idx-esp source="*RUNINFO_HISTORY.*"
| eval Log_Source = "Distributed"
| rename STATUS as Status "COMP CODE" as CompCode APPL as ScheduleName NODE_GROUP as AgentName NODE_ID as HostName CPU_TIME as "CPU-Time"
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| eval Source=if(like(source,"%RUNINFO_HISTORY%"),"Control-M","ESP")
| dedup Source ScheduleName ScheduleDate AgentName HostName JobName StartTime EndTime
| table Log_Source ScheduleDate JobName StartTime EndTime Duration
| sort 0 - Duration
| head 50
| append
    [search index=idx-esp source="*RUNINFO_HISTORY.*" DATA_CENTER=CPUA JOB_ID="0*"
| eval Log_Source="Mainframe"
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| table Log_Source ScheduleDate JOB_MEM_NAME StartTime EndTime Duration
| sort 0 - Duration
| head 50]
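
If you want to sanity-check the labeling pattern by itself, here is a minimal sketch using makeresults, with made-up field values (the job names and durations are illustrative only):

| makeresults
| eval Log_Source="Distributed", JobName="job_a", Duration="00:00:05"
| append
    [| makeresults
    | eval Log_Source="Mainframe", JOB_MEM_NAME="JOBB01", Duration="00:00:09"]
| table Log_Source JobName JOB_MEM_NAME Duration

Each row keeps whatever fields its own leg set before the append, and the table command controls column order, so listing Log_Source first puts it at the start. Also note the two legs use different job-name fields (JobName vs. JOB_MEM_NAME), so the combined table will have two partially-filled columns; rename one to match the other if you want a single job-name column.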

 

---
If this reply helps you, Karma would be appreciated.



man03359
Communicator

@richgalloway 

It worked 🙂 thanks a lot 🙂
