All Posts

Hi @gcusello, yes, for that I used stats values of the field name, but I am not able to separate the error and success files. This is my new query:
index=mulesoft environment=* (applicationName IN ("Test"))
| stats values(content.FileList{}) as FileList values(content.FileName) as Filename values(content.Filename) as filename1 min(timestamp) AS Logon_Time, max(timestamp) AS Logoff_Time BY correlationId applicationName
| eval Status=case(priority="ERROR","ERROR", priority="WARN","WARN", priority!="ERROR","SUCCESS")
| eval SuccessFileName=mvdedup(mvfilter(match(message, "%succesfully*") OR match(message, "Summary of all Batch*")))
| eval SuccessFileName= coalesce(Filename,filename1)
| eval FailureFileName=mvdedup(mvfilter(match(priority, "WARN") OR match(priority, "ERROR")))
| eval FailureFileName= coalesce(Filename,filename1)
| table SuccessFileName FailureFileName
Hi @pichertklaus, this can happen if you have many events or if your HFs and IDXs have few resources. First of all, I suggest using an rsyslog (or syslog-ng) server to receive the syslogs, so you still capture them even if Splunk is down or overloaded. Then: how many events are you receiving via syslog, and what resources do your servers have? Ciao. Giuseppe
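P.S. As a rough sketch of one way to see how many events your forwarders are processing, assuming their _internal logs reach the indexers (the host filter is a placeholder - replace it with your UF/HF hostnames):
index=_internal host=<your_forwarder_host> source=*metrics.log group=per_sourcetype_thruput
| timechart span=1h sum(ev) AS events_processed BY series
This gives a per-sourcetype event count per hour on the forwarder side, which you can compare against the event count in the destination index for the same time range.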
Hello @auzark, you can assign _indextime to a field and then use that to find the difference. The only catch is that _indextime is in epoch time, so you'll have to convert GenerationTime into epoch format before calculating the difference. Your query should look something like this:
index=splunk_index sourcetype=splunk_sourcetype
| eval tnow = now()
| eval indexTime = _indextime
| eval GenerationTime_epoch=strptime(GenerationTime,"%Y-%m-%d %H:%M:%S")
| convert ctime(tnow)
| convert ctime(_indextime) as Index_Time
| eval secondsDifference=indexTime-_time
| eval GenTimeDifferenceInSeconds = GenerationTime_epoch-indexTime
| table Node EventNumber GenerationTime Index_Time, _time, secondsDifference, GenTimeDifferenceInSeconds
Thanks, Tejas. --- If the above solution helps, an upvote is appreciated!!
Hi All, We have a strange problem here. On a Linux syslog server, the logs from different systems are each saved to a file. These files are monitored by a Splunk UF and forwarded to two heavy forwarders to be saved on the indexer. We have now noticed that the number of events in the Splunk index sometimes differs from the syslog data delivered; sometimes events are missing in the middle. Since reports and alerts are configured on the Splunk data, it is of course essential that ALL events arrive in Splunk. Is such behavior known? Where can I find, for example, how many events have been processed on the HFs? Regards Klaus
Convert GenerationTime into epoch format, then take the difference between the result and _indextime.
index=splunk_index sourcetype=splunk_sourcetype
| eval tnow = now()
| convert ctime(tnow)
| convert ctime(_indextime) as Index_Time
| eval secondsDifference=_indextime-_time
| eval genEpoch = strptime(GenerationTime, "%Y-%m-%d %H:%M:%S")
| eval genSecondsDifference = _indextime - genEpoch
| table Node EventNumber GenerationTime Index_Time, _time, secondsDifference, genSecondsDifference
Hi @karthi2809, to help you I also need the main search. Anyway, you should: create a main search putting the three searches in OR, then correlate them using the stats command by the common key, adding values(field_name) AS field_name for each field that you want to display. A rough sketch of this pattern is below. Ciao. Giuseppe
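A minimal sketch of that pattern, using placeholder values taken from your other searches (the application name, message pattern, and field names are assumptions to adjust to your data):
index=mulesoft applicationName IN ("TEST")
| eval SuccessFileName=if(like(message,"%successfully%"), 'content.Filename', null())
| eval FailureFileName=if(in(priority,"ERROR","WARN"), 'content.Filename', null())
| stats values(content.FileList{}) AS FileList values(SuccessFileName) AS SuccessFileName values(FailureFileName) AS FailureFileName BY correlationId applicationName
This way a single pass over the events replaces both joins: each event contributes to either the success or the failure file list, and stats groups them back together by correlationId.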
Hi All, I have a field called filename, and I want to populate the results from that field. I created two joins to separate the success files and failure files. Is there any other way to do this without using join?
| join CorrelationId type=left
    [ | search index=mulesoft applicationName IN (TEST) AND message IN ("*File put Succesfully*" ,"*successful Call*" , "*file processed successfully*" , "*Archive file processed successfully*" , "*processed successfully for file name*")
    | rename content.Filename as SuccessFileName correlationId as CorrelationId
    | table CorrelationId SuccessFileName
    | stats values(*) as * by CorrelationId]
| table CorrelationId InterfaceName ApplicationName FileList SuccessFileName Timestamp
| join CorrelationId type=left
    [ | search index=mulesoft applicationName IN (p-oracle-fin-processor , p-oracle-fin-processor-2 , p-wd-finance-api) AND priority IN (ERROR,WARN)
    | rename content.Filename as FailureFileName correlationId as CorrelationId timestamp as ErrorTimestamp content.ErrorType as ErrorType content.ErrorMsg as ErrorMsg
    | table FailureFileName CorrelationId ErrorType ErrorMsg ErrorTimestamp
Use an eval command in each subsearch to set the Log_Source field, and include Log_Source in each table command so the column appears in the output.
index=idx-esp source="*RUNINFO_HISTORY.*"
| eval Log_Source = "Distributed"
| rename STATUS as Status "COMP CODE" as CompCode APPL as ScheduleName NODE_GROUP as AgentName NODE_ID as HostName CPU_TIME as CPU-Time
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| eval Source=if(like(source,"%RUNINFO_HISTORY%"),"Control-M","ESP")
| dedup Source ScheduleName ScheduleDate AgentName HostName JobName StartTime EndTime
| table Log_Source ScheduleDate JobName StartTime EndTime Duration
| sort 0 - Duration
| head 50
| append
    [search index=idx-esp source="*RUNINFO_HISTORY.*" DATA_CENTER=CPUA JOB_ID="0*"
    | eval Log_Source="Mainframe"
    | eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
    | eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
    | eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
    | eval DurationSecs=floor(UpdatedTime - epoc_start_time)
    | eval Duration=tostring(DurationSecs,"duration")
    | table Log_Source ScheduleDate JOB_MEM_NAME StartTime EndTime Duration
    | sort 0 - Duration
    | head 50]
Hello @PickleRick. Your first scenario is right.
This requires more thorough debugging. You need to look into splunkd.log on the indexers, and possibly on the CM, for errors.
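For example, something along these lines (just a sketch; narrow the host filter to your indexers and CM):
index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN)
| stats count BY host component
| sort - count
This gives a quick overview of which components are reporting problems; from there you can drill into the raw splunkd events for the noisiest component.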
I have two queries which give me two tables, named Distributed & Mainframe, as below.
Distributed:
index=idx-esp source="*RUNINFO_HISTORY.*"
| rename STATUS as Status "COMP CODE" as CompCode APPL as ScheduleName NODE_GROUP as AgentName NODE_ID as HostName CPU_TIME as CPU-Time
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| eval Source=if(like(source,"%RUNINFO_HISTORY%"),"Control-M","ESP")
| dedup Source ScheduleName ScheduleDate AgentName HostName JobName StartTime EndTime
| table ScheduleDate JobName StartTime EndTime Duration
| sort 0 - Duration
| head 10
Mainframe:
index=idx-esp source="*RUNINFO_HISTORY.*" DATA_CENTER=CPUA JOB_ID="0*"
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| table ScheduleDate JOB_MEM_NAME StartTime EndTime Duration
| sort 0 - Duration
| head 10
I am trying to append both tables into one, using something like this:
index=idx-esp source="*RUNINFO_HISTORY.*"
| rename STATUS as Status "COMP CODE" as CompCode APPL as ScheduleName NODE_GROUP as AgentName NODE_ID as HostName CPU_TIME as CPU-Time
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| eval Source=if(like(source,"%RUNINFO_HISTORY%"),"Control-M","ESP")
| dedup Source ScheduleName ScheduleDate AgentName HostName JobName StartTime EndTime
| table ScheduleDate JobName StartTime EndTime Duration
| sort 0 - Duration
| head 50
| append
    [search index=idx-esp source="*RUNINFO_HISTORY.*" DATA_CENTER=CPUA JOB_ID="0*"
    | eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
    | eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
    | eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
    | eval DurationSecs=floor(UpdatedTime - epoc_start_time)
    | eval Duration=tostring(DurationSecs,"duration")
    | table ScheduleDate JOB_MEM_NAME StartTime EndTime Duration
    | sort 0 - Duration
    | head 50]
The issue is that I am trying to add a column named "Log_Source" at the start which tells either Distributed or Mainframe for its corresponding result. I am not sure how to achieve it. Please help.
Hi all, I have a question about using RelayState with SAML when using Azure AD B2C as the IdP. We successfully managed to integrate Splunk as SP with AD B2C as IdP using SAML and custom policies. Now we want to redirect users to another URL after successful authentication, and the only way forward I could find was via the RelayState parameter. Below are all the combinations I tried for the Single Sign On (SSO) URL:
https://<tenant-id>.b2clogin.com/<tenant-id>.onmicrosoft.com/B2C_1A_signup_signin/samlp/sso/login?SAML_Request=<base64_SAML_Auth_Request>&RelayState=https%3A%2F%2FredirectWebsite.com
https://<tenant-id>.b2clogin.com/<tenant-id>.onmicrosoft.com/B2C_1A_signup_signin/samlp/sso/login?RelayState=https%3A%2F%2FredirectWebsite.com
https://<tenant-id>.b2clogin.com/<tenant-id>.onmicrosoft.com/B2C_1A_signup_signin/samlp/sso/login?RelayState=https://redirectWebsite.com
I keep getting the error "error while parsing relaystate. failed to decode relaystate." Any advice on how to embed the RelayState in the SSO URL?
Good morning. I am receiving events from Windows on a collector with Splunk Edge Processor, and it is sending them correctly to the tenant, but not to the correct index. According to the data, the events go through the pipeline, but they are sent to main instead of the intended index.
This is the SPL2 of the pipeline:
/* A valid SPL2 statement for a pipeline must start with "$pipeline", and include "from $source" and "into $destination". */
$pipeline = | from $source
| eval index = if(isnull(index), "usa_windows", index)
| into $destination;
@ITWhisperer, yes, I'm trying to set a token based on the value of has_runtime, since I want to show some charts only if that particular data is present. For this I am trying to create a token that I can use to show or hide the charts.
Hello, I have been receiving the events without any formatting, and I have installed the add-on on the HF and in the cloud.
I would like some help creating a report that shows the difference in seconds between my event timestamp and the Splunk landing timestamp. I have the query below, which gives me the difference between _indextime and _time, but I would also like the difference in seconds between GenerationTime (e.g. 2024-04-23 12:49:52) and _indextime.
index=splunk_index sourcetype=splunk_sourcetype
| eval tnow = now()
| convert ctime(tnow)
| convert ctime(_indextime) as Index_Time
| eval secondsDifference=_indextime-_time
| table Node EventNumber GenerationTime Index_Time, _time, secondsDifference
In short, there will be a number of installs and configurations you will have to perform. Follow this for the Splunk Stream components: https://docs.splunk.com/Documentation/StreamApp/8.1.1/DeployStreamApp/AboutSplunkStream
High-level steps:
1. Design your architecture and the data flow - ensure all required ports can communicate.
2. Ensure your device can send netflow data.
3. Configure a Splunk index for the netflow data in Splunk.
4. Configure Splunk HEC in Splunk and set it to the netflow index.
5. Install and configure the Splunk netflow components (ISF); this is where you point the device to send its netflow data, which is then sent to Splunk HEC.
Follow this for Splunk HEC configuration: https://docs.splunk.com/Documentation/SplunkCloud/latest/Data/UsetheHTTPEventCollector
Follow this for your architecture - there is a good diagram that you need to consider for all the components: https://docs.splunk.com/Documentation/StreamApp/8.1.1/DeployStreamApp/DeploymentArchitecture
Example of Stream in action: https://lantern.splunk.com/Data_Descriptors/DNS_data/Installing_and_configuring_Splunk_Stream
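Once data is flowing, a quick sanity check from the search head (just a sketch - the index name "netflow" is whatever you created in step 3, and Stream-forwarded data typically arrives with a stream:* sourcetype):
index=netflow sourcetype=stream:* earliest=-15m
| stats count BY sourcetype host
| sort - count
If this returns events, the ISF-to-HEC path is working end to end.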
Yes, all the indexers are up and running fine. So what would be a possible solution to mitigate the above issue?
@ITWhisperer  Thank you for your assistance. I've successfully tested the query you provided and managed to retrieve the expected success and failure counts. However, in addition to this, I also need to retrieve the step count for each step from Query 1.
For instance, when I ran Query 1 individually for the last 30 days, I observed that 20 steps were extracted as a field along with the count of each step. Similarly, when I ran Query 2 for the same period, approximately 10 successes were extracted as a field with their respective counts. Likewise, when I ran Query 3, around 18 failures were extracted as a field with their counts.
So, with the combined search you provided, I'm able to obtain a total of 18 fields comprising both successes and failures. This is because if any of the step fields have either a success or a failure, it reflects in the query output. However, the other two step fields don't have successes or failures, and their step information is not present in the output of the provided query.
Therefore, we need to include the output of the first query, which includes the step field along with its count, in addition to the success and failure counts obtained from Queries 2 and 3. Since Query 1 captures all events, Query 2 captures successes, and Query 3 captures failures, we need to ensure that the first query's output is also included in the combined search. Could you please review and update accordingly? Thank you for your attention to this matter.
In the done handler, you only have access to the first row of the results, so you would only be able to set a token based on the first result. Is this what you are actually trying to do?