All Posts

Convert GenerationTime into epoch format, then take the difference between the result and _indextime.

index=splunk_index sourcetype=splunk_sourcetype
| eval tnow = now()
| convert ctime(tnow)
| convert ctime(_indextime) as Index_Time
| eval secondsDifference=_indextime-_time
| eval genEpoch = strptime(GenerationTime, "%Y-%m-%d %H:%M:%S")
| eval genSecondsDifference = _indextime - genEpoch
| table Node EventNumber GenerationTime Index_Time _time secondsDifference genSecondsDifference
Hi @karthi2809, to help you I also need the main search. Anyway, you should: create a main search putting the three searches in OR, then correlate them using the stats command by the common key, adding values(field_name) as field_name for each field that you want to display. Ciao. Giuseppe
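For illustration only, a minimal sketch of that pattern, assuming a hypothetical common key CorrelationId and hypothetical fields fieldA and fieldB (substitute your real searches and field names):

(index=your_index sourcetype=typeA) OR (index=your_index sourcetype=typeB) OR (index=your_index sourcetype=typeC)
| stats values(fieldA) as fieldA values(fieldB) as fieldB by CorrelationId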
Hi All, I have a field called filename, and I want to populate the result from the filename field. I created two joins to separate the Success File and Failure File. Is there any other way to do this without using join?

| join CorrelationId type=left
    [ | search index=mulesoft applicationName IN (TEST) AND message IN ("*File put Succesfully*", "*successful Call*", "*file processed successfully*", "*Archive file processed successfully*", "*processed successfully for file name*")
    | rename content.Filename as SuccessFileName correlationId as CorrelationId
    | table CorrelationId SuccessFileName
    | stats values(*) as * by CorrelationId]
| table CorrelationId InterfaceName ApplicationName FileList SuccessFileName Timestamp
| join CorrelationId type=left
    [ | search index=mulesoft applicationName IN (p-oracle-fin-processor, p-oracle-fin-processor-2, p-wd-finance-api) AND priority IN (ERROR, WARN)
    | rename content.Filename as FailureFileName correlationId as CorrelationId timestamp as ErrorTimestamp content.ErrorType as ErrorType content.ErrorMsg as ErrorMsg
    | table FailureFileName CorrelationId ErrorType ErrorMsg ErrorTimestamp]
Use an eval command in each of the two searches to set the Log_Source field, and include Log_Source in the table commands so the column appears in the results.

index=idx-esp source="*RUNINFO_HISTORY.*"
| eval Log_Source = "Distributed"
| rename STATUS as Status "COMP CODE" as CompCode APPL as ScheduleName NODE_GROUP as AgentName NODE_ID as HostName CPU_TIME as CPU-Time
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| eval Source=if(like(source,"%RUNINFO_HISTORY%"),"Control-M","ESP")
| dedup Source ScheduleName ScheduleDate AgentName HostName JobName StartTime EndTime
| table Log_Source ScheduleDate JobName StartTime EndTime Duration
| sort 0 - Duration
| head 50
| append
    [search index=idx-esp source="*RUNINFO_HISTORY.*" DATA_CENTER=CPUA JOB_ID="0*"
    | eval Log_Source="Mainframe"
    | eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
    | eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
    | eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
    | eval DurationSecs=floor(UpdatedTime - epoc_start_time)
    | eval Duration=tostring(DurationSecs,"duration")
    | table Log_Source ScheduleDate JOB_MEM_NAME StartTime EndTime Duration
    | sort 0 - Duration
    | head 50]
Hello, @PickleRick. Your first scenario is right.
This requires more thorough debugging. You need to look into your splunkd.log on indexers and possibly on the CM and look for errors.
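For example, a rough starting point (only a sketch; tighten the filters and time range as needed) is to query the internal index instead of reading splunkd.log on each host directly:

index=_internal sourcetype=splunkd log_level=ERROR
| stats count by host component
| sort - count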
I have two queries which give me two tables, named Distributed and Mainframe, as below.

Distributed:

index=idx-esp source="*RUNINFO_HISTORY.*"
| rename STATUS as Status "COMP CODE" as CompCode APPL as ScheduleName NODE_GROUP as AgentName NODE_ID as HostName CPU_TIME as CPU-Time
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| eval Source=if(like(source,"%RUNINFO_HISTORY%"),"Control-M","ESP")
| dedup Source ScheduleName ScheduleDate AgentName HostName JobName StartTime EndTime
| table ScheduleDate JobName StartTime EndTime Duration
| sort 0 - Duration
| head 10

Mainframe:

index=idx-esp source="*RUNINFO_HISTORY.*" DATA_CENTER=CPUA JOB_ID="0*"
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| table ScheduleDate JOB_MEM_NAME StartTime EndTime Duration
| sort 0 - Duration
| head 10

I am trying to append both tables into one, using something like this:

index=idx-esp source="*RUNINFO_HISTORY.*"
| rename STATUS as Status "COMP CODE" as CompCode APPL as ScheduleName NODE_GROUP as AgentName NODE_ID as HostName CPU_TIME as CPU-Time
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| eval Source=if(like(source,"%RUNINFO_HISTORY%"),"Control-M","ESP")
| dedup Source ScheduleName ScheduleDate AgentName HostName JobName StartTime EndTime
| table ScheduleDate JobName StartTime EndTime Duration
| sort 0 - Duration
| head 50
| append
    [search index=idx-esp source="*RUNINFO_HISTORY.*" DATA_CENTER=CPUA JOB_ID="0*"
    | eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
    | eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
    | eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
    | eval DurationSecs=floor(UpdatedTime - epoc_start_time)
    | eval Duration=tostring(DurationSecs,"duration")
    | table ScheduleDate JOB_MEM_NAME StartTime EndTime Duration
    | sort 0 - Duration
    | head 50]

The issue is that I am trying to add a column named "Log_Source" at the start which says either Distributed or Mainframe for its corresponding result. I am not sure how to achieve it. Please help.
Hi all, I have a question about using RelayState with SAML when using Azure AD B2C as the IdP. We successfully managed to integrate Splunk as SP with AD B2C as IdP using SAML and custom policies. Now we want to redirect users to another URL after successful authentication, and the only way forward I could find was via the RelayState parameter. Below are all the combinations I tried for the Single Sign On (SSO) URL:

https://<tenant-id>.b2clogin.com/<tenant-id>.onmicrosoft.com/B2C_1A_signup_signin/samlp/sso/login?SAML_Request=<base64_SAML_Auth_Request>&RelayState=https%3A%2F%2FredirectWebsite.com
https://<tenant-id>.b2clogin.com/<tenant-id>.onmicrosoft.com/B2C_1A_signup_signin/samlp/sso/login?RelayState=https%3A%2F%2FredirectWebsite.com
https://<tenant-id>.b2clogin.com/<tenant-id>.onmicrosoft.com/B2C_1A_signup_signin/samlp/sso/login?RelayState=https://redirectWebsite.com

I keep getting the error "error while parsing relaystate. failed to decode relaystate." Any advice on how to embed the RelayState in the SSO URL?
Good morning. I am receiving events from Windows on a collector with Splunk Edge Processor and it is sending them correctly to the tenant, but not to the correct index. According to the data, events go through the pipeline but are sent to main instead of the usa_windows index.

This is the SPL2 of the pipeline:

/* A valid SPL2 statement for a pipeline must start with "$pipeline", and include "from $source" and "into $destination". */
$pipeline = | from $source
| eval index = if(isnull(index), "usa_windows", index)
| into $destination;
@ITWhisperer, yes, I'm trying to set a token based on the value of has_runtime, since I want to show some charts only if that particular data is present. For this I am trying to create a token that I can use to show or hide the charts.
Hello, I have been receiving the events without formatting, and I have installed the add-on on the HF and in the cloud.
I would like some help creating a report that will show the difference in seconds between my event timestamp and the Splunk landing timestamp. I have the below query which gives me the difference between _indextime and _time, but I would also like the difference in seconds between GenerationTime (e.g. 2024-04-23 12:49:52) and _indextime.

index=splunk_index sourcetype=splunk_sourcetype
| eval tnow = now()
| convert ctime(tnow)
| convert ctime(_indextime) as Index_Time
| eval secondsDifference=_indextime-_time
| table Node EventNumber GenerationTime Index_Time _time secondsDifference
In short, there will be a number of installs and configurations you will have to perform. Follow this for the Splunk Stream components: https://docs.splunk.com/Documentation/StreamApp/8.1.1/DeployStreamApp/AboutSplunkStream

High-level steps:
1. Design your architecture and the data flow - ensure all required ports can communicate
2. Ensure your device can send netflow data
3. Configure a Splunk index for the netflow data in Splunk
4. Configure Splunk HEC in Splunk and set it to the netflow index (see the test sketch after this list)
5. Install and configure the Splunk netflow components (ISF) - this is where you point the device to send its netflow data, which is then forwarded to Splunk HEC

Follow this for Splunk HEC configuration: https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UsetheHTTPEventCollector
Follow this for your architecture - there is a good diagram covering all the components you need to consider: https://docs.splunk.com/Documentation/StreamApp/8.1.1/DeployStreamApp/DeploymentArchitecture
Example of Stream in action: https://lantern.splunk.com/Data_Descriptors/DNS_data/Installing_and_configuring_Splunk_Stream
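As referenced in step 4, a quick way to confirm the HEC token and target index work before wiring up the stream forwarder is to send a manual test event. This is only a sketch with placeholder host, token, and index values; the endpoint URL differs for Splunk Cloud stacks:

curl -k "https://<splunk-host>:8088/services/collector/event" \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"event": "hec connectivity test", "index": "<netflow-index>"}'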
Yes, all the indexers are up and running fine. So what would be a possible solution to mitigate the above issue?
@ITWhisperer Thank you for your assistance. I've successfully tested the query you provided and managed to retrieve the expected success and failure counts. However, in addition to this, I also need to retrieve the step count for each step from Query 1.

For instance, when I ran Query 1 individually for the last 30 days, I observed that 20 steps were extracted as a field along with the count of each step. Similarly, when I ran Query 2 for the same period, approximately 10 successes were extracted as a field with their respective counts. Likewise, when I ran Query 3, around 18 failures were extracted as a field with their counts.

So, with the combined search you provided, I'm able to obtain a total of 18 fields comprising both successes and failures. This is because if any of the step fields has either a success or a failure, it appears in the query output; however, the other two step fields have no successes or failures, and their step information is not present in the output of the provided query.

Therefore, we need to include the output of the first query, which includes the step field along with its count, in addition to the success and failure counts obtained from Queries 2 and 3. Since Query 1 captures all events, Query 2 captures successes, and Query 3 captures failures, we need to ensure that the first query's output is also included in the combined search. Could you please review and update accordingly? Thank you for your attention to this matter.
In the done handler, you only have access to the first row of the results, so you would only be able to set a token based on the first result. Is this what you are actually trying to do?
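As an illustration only (a rough Simple XML sketch, not the poster's actual dashboard; the base search and the way has_runtime is computed are assumptions), a single-row result can drive a show/hide token like this:

<dashboard>
  <label>Runtime panels (sketch)</label>
  <!-- global search used only to set/unset the token from the first result row -->
  <search>
    <query>index=my_index sourcetype=my_sourcetype | stats count(eval(component="runtime")) as has_runtime</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
    <done>
      <condition match="'result.has_runtime' &gt; 0">
        <set token="show_runtime">true</set>
      </condition>
      <condition>
        <unset token="show_runtime"></unset>
      </condition>
    </done>
  </search>
  <row>
    <!-- the panel renders only while show_runtime is set -->
    <panel depends="$show_runtime$">
      <chart>
        <search>
          <query>index=my_index sourcetype=my_sourcetype component="runtime" | timechart count</query>
        </search>
      </chart>
    </panel>
  </row>
</dashboard>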
And each of your indexers is in a different site? And all your indexers are online and OK? (BTW, this seems a bit overly complicated; unless you're planning on some further expansion of your environment, you could perfectly well get away with a single-site setup.)
yes...
We have a total of 3 indexers and below is the site configuration:

Replication Factor: origin:1, site1:1, site2:1, site3:1, total:3
Search Factor: origin:1, total:2

Can you please help me with that? Thanks.
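For reference, those values map to the multisite settings in server.conf on the cluster manager roughly like this (a sketch only, assuming three sites named site1 to site3; attribute names and the mode value may differ by Splunk version):

[clustering]
mode = manager
multisite = true
available_sites = site1,site2,site3
site_replication_factor = origin:1,site1:1,site2:1,site3:1,total:3
site_search_factor = origin:1,total:2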
This appears to be a bug - I would log a support call for this. (You may need to upgrade.)