We are running a syslog-ng system which receives data from various appliances. From what I can tell, on the syslog server itself all data is stored in files per sending host/date, and the event count matches the event count on the generating host. We checked some random samples for accuracy, so the syslog server itself does not seem to be the limit.

sudo syslog-ng-ctl query get "source.*"
source.s_udp514.processed=844024
source.s_tcp514.processed=11100270
source.s_tcp1514.processed=3150959

Syslog server: 2 CPUs, 8 GB RAM

We are running 2 Heavy Forwarders which receive the data from the Universal Forwarder installed on the syslog-ng server and send it to 6 Splunk indexers. As we are not operating the HFs/IDXs, I cannot say much about their sizing.
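For reference, a quick way to compare these counters with what actually reached Splunk is a count-by-host search over the same time window. This is only a sketch; the index and sourcetype names are assumptions and need to be adapted to the actual inputs:

| tstats count where index=syslog sourcetype=syslog earliest=-24h latest=now by host, source
| sort - count

The per-host totals can then be diffed against the line counts of the corresponding files on the syslog-ng server.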
I'm trying to use an outer join but I am not getting the desired output. It looks like the query on the left has fewer events than the subsearch query. Could that be the reason for the outer join not working? I can't use stats because both queries span multiple indexes and sourcetypes.
How can I create a custom table in a Splunk view that stores some user credentials, and how can I create a button that opens a new-record form through which users can submit the information in Splunk? I have attached an image for reference.
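One commonly used building block for this kind of editable table is a lookup that the dashboard writes to with outputlookup. The sketch below is only an illustration: the lookup name user_credentials.csv and the form tokens $username$ and $credential$ are placeholders, and storing real credentials in a plain lookup should be reconsidered (a credential store, or at least tightly restricted permissions, is safer):

| makeresults
| eval username="$username$", credential="$credential$"
| fields - _time
| outputlookup append=true user_credentials.csv

Wired to a submit button in a form, each run appends one row; an inputlookup on user_credentials.csv then drives the table panel.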
I want to add a download/export button, which I am able to do, but the issue is that the result of the CSV is also visible in the panel, like below. I want to show only the download button while hiding the results table, which I am not able to do.

<row>
  <panel>
    <table>
      <search>
        <done>
          <eval token="date">strftime(now(), "%d-%m-%Y")</eval>
          <set token="sid">$job.sid$</set>
        </done>
        <query>index=test</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="count">20</option>
      <option name="dataOverlayMode">none</option>
      <option name="drilldown">none</option>
      <option name="percentagesRow">false</option>
      <option name="refresh.display">progressbar</option>
      <option name="rowNumbers">false</option>
      <option name="totalsRow">false</option>
      <option name="wrap">true</option>
    </table>
    <html>
      <a href="/api/search/jobs/$sid$/results?isDownload=true&amp;timeFormat=%25FT%25T.%25Q%25%3Az&amp;maxLines=0&amp;count=0&amp;filename=test_$date$.csv&amp;outputMode=csv" class="button js-button">Download</a>
      <style>
        .button { background-color: steelblue; border-radius: 5px; color: white; padding: .5em; text-decoration: none; }
        .button:focus, .button:hover { background-color: #2A4E6C; color: White; }
      </style>
    </html>
  </panel>
</row>
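One possible approach, sketched below and not tested here, is to drop the table element entirely and define the search at the dashboard level: it still runs when the dashboard loads, its done handler still sets the $sid$ and $date$ tokens, and only the HTML panel with the download link is rendered (styles omitted for brevity):

<dashboard>
  <label>CSV export (sketch)</label>
  <!-- Dashboard-level search: nothing is rendered for it, but the done handler still fires. -->
  <search>
    <query>index=test</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
    <done>
      <eval token="date">strftime(now(), "%d-%m-%Y")</eval>
      <set token="sid">$job.sid$</set>
    </done>
  </search>
  <row>
    <panel>
      <html>
        <a href="/api/search/jobs/$sid$/results?isDownload=true&amp;count=0&amp;filename=test_$date$.csv&amp;outputMode=csv" class="button js-button">Download</a>
      </html>
    </panel>
  </row>
</dashboard>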
Adding the 'ess_user' Role:
To edit and create a new 'Incident Review' while still in the 'user' role, you need to add the 'ess_user' role to your current user role. This is necessary because we have set capabilities related to 'ess_user', which are required for this task. The 'ess_user' role should be given the following capabilities:
- edit_notable_events: allows the role to create new (ad-hoc) notable events and edit existing ones.
- edit_log_review_settings: permits the role to edit Incident Review settings.
By adding these capabilities, you should be able to edit and create a new 'Incident Review'.

Configuring Permissions in Splunk Enterprise Security:
This can be done by navigating to Configure -> General -> Permissions in Splunk Enterprise Security. Ensure the 'ess_user' role is given the following permissions:
- Create New Notable Events
- Edit Incident Review
- Edit Notable Events

Note: The 'ess_analyst' role can be directly assigned to a user, enabling them to manage Incident Review dashboards. A user with 'ess_analyst' must be able to edit notable events.
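If you manage roles in configuration rather than in the UI, the same capabilities can be granted in authorize.conf; a sketch only, assuming the stanza name matches the role name (verify against your Enterprise Security version before deploying):

[role_ess_user]
edit_notable_events = enabled
edit_log_review_settings = enabled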
Hello.
We are deploying a new search head in our Splunk environment. We are using Windows 2019 servers as the platform. The search head is not working, and we can see these errors on the indexer:

WARN BundleDataProcessor [12404 TcpChannelThread] - Failed to create file E:\Splunk\var\run\searchpeers\[search_head_hostname]-1713866571.e035b54cfcafb33b.tmp\apps\TA-microsoft-graph-security-add-on-for-splunk\bin\ta_microsoft_graph_security_add_on_for_splunk\aob_py2\cloudconnectlib\splunktacollectorlib\data_collection\ta_checkpoint_mng.py while untarring E:\Splunk\var\run\searchpeers\[search_head_hostname]-1713866571.bundle: The system cannot find the path specified.

The file name (including the path) exceeds the 260-character limit on Windows. How can we use this add-on?
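Two avenues sometimes used for this kind of MAX_PATH problem, both of which should be verified in a test environment first: enabling long-path support in Windows (the LongPathsEnabled setting), and excluding the offending deep files from knowledge-bundle replication on the search head via distsearch.conf. A sketch of the latter follows; the entry name and regex are assumptions to check against the distsearch.conf spec, and excluded files will not be available on the search peers:

# distsearch.conf on the search head (sketch only)
[replicationBlacklist]
graph_ta_py = (.*[/\\])?apps[/\\]TA-microsoft-graph-security-add-on-for-splunk[/\\]bin[/\\].*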
Hi @gcusello
Yes, for that I used stats values of the field name, but I cannot separate the error and success files. This is my new query:

index=mulesoft environment=* (applicationName IN ("Test"))
| stats values(content.FileList{}) as FileList values(content.FileName) as Filename values(content.Filename) as filename1 min(timestamp) AS Logon_Time, max(timestamp) AS Logoff_Time BY correlationId applicationName
| eval Status=case(priority="ERROR","ERROR", priority="WARN","WARN", priority!="ERROR","SUCCESS")
| eval SuccessFileName=mvdedup(mvfilter(match(message, "%succesfully*") OR match(message, "Summary of all Batch*")))
| eval SuccessFileName=coalesce(Filename,filename1)
| eval FailureFileName=mvdedup(mvfilter(match(priority, "WARN") OR match(priority, "ERROR")))
| eval FailureFileName=coalesce(Filename,filename1)
| table SuccessFileName FailureFileName
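For reference, one way to split the names is to derive the success/failure file fields per event and only then aggregate; a sketch, where the field names and the ERROR/WARN convention are taken from the query above but the exact match logic is illustrative:

index=mulesoft environment=* applicationName IN ("Test")
| eval file=coalesce('content.FileName','content.Filename')
| eval SuccessFileName=if(priority!="ERROR" AND priority!="WARN", file, null())
| eval FailureFileName=if(priority="ERROR" OR priority="WARN", file, null())
| stats values(SuccessFileName) as SuccessFileName values(FailureFileName) as FailureFileName by correlationId applicationName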
Hi @pichertklaus, this can happen if you have many events or if your HFs and IDXs have few resources. First, I suggest using an rsyslog (or syslog-ng) server to receive the syslogs, so you still capture them even if Splunk is down or overloaded. Then, how many events are you receiving by syslog? What resources do your servers have? Ciao. Giuseppe
Hello @auzark,
You can assign a particular field to _indextime and then use that to find the difference. The only catch is that _indextime is in epoch time, so you'll have to convert GenerationTime into epoch format before calculating the difference. Your query should look something like this:

index=splunk_index sourcetype=splunk_sourcetype
| eval tnow = now()
| eval indexTime = _indextime
| eval GenerationTime_epoch=strptime(GenerationTime,"%Y-%m-%d %H:%M:%S")
| convert ctime(tnow)
| convert ctime(_indextime) as Index_Time
| eval secondsDifference=indexTime-_time
| eval GenTimeDifferenceInSeconds=GenerationTime_epoch-indexTime
| table Node EventNumber GenerationTime Index_Time _time secondsDifference GenTimeDifferenceInSeconds

Thanks, Tejas.
---
If the above solution helps, an upvote is appreciated!!
Hi All,
We have a strange problem here. On a Linux syslog server, the logs from different systems are each saved to a file. These files are monitored by a Splunk UF and forwarded to two heavy forwarders to be saved on the indexers. We have now noticed that the number of events in the Splunk index sometimes differs from the syslog data delivered; sometimes events are missing in the middle. Since reports and alerts are configured on the Splunk data, it is of course essential that ALL events arrive in Splunk. Is such behavior known, and where can I find, for example, how many events have been processed on the HFs?
Regards
Klaus
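For the "how many events did the HFs process" part, the forwarders' throughput metrics are searchable from the _internal index; a sketch (restrict it to the heavy forwarder hosts, and note that metrics.log reports aggregated throughput rather than a per-event audit trail):

index=_internal source=*metrics.log group=per_sourcetype_thruput
| timechart span=1h sum(ev) as events by series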
Convert GenerationTime into epoch format, then take the difference between the result and _indextime.

index=splunk_index sourcetype=splunk_sourcetype
| eval tnow = now()
| convert ctime(tnow)
| convert ctime(_indextime) as Index_Time
| eval secondsDifference=_indextime-_time
| eval genEpoch = strptime(GenerationTime, "%Y-%m-%d %H:%M:%S")
| eval genSecondsDifference = _indextime - genEpoch
| table Node EventNumber GenerationTime Index_Time, _time, secondsDifference, genSecondsDifference
Hi @karthi2809,
to help you I also need the main search. Anyway, you should:
- create one main search putting the three searches in OR,
- correlate them using the stats command by the common key, adding values(field_name) as field_name for each field that you want to display (see the sketch below).
Ciao.
Giuseppe
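A sketch of that pattern, with the three OR'd clauses and the sourcetype names as placeholders to adapt, and correlationId as the common key:

(index=mulesoft sourcetype=st_success) OR (index=mulesoft sourcetype=st_errors) OR (index=mulesoft sourcetype=st_files)
| stats values(content.Filename) as Filename values(priority) as priority values(message) as message by correlationId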
Hi All,
I have a field called filename, and I want to populate the result from the filename field, so I created two joins to separate the success and failure files. Is there any other way without using join?

Success File and Failure File:

| join CorrelationId type=left
    [ | search index=mulesoft applicationName IN (TEST) AND message IN ("*File put Succesfully*", "*successful Call*", "*file processed successfully*", "*Archive file processed successfully*", "*processed successfully for file name*")
    | rename content.Filename as SuccessFileName correlationId as CorrelationId
    | table CorrelationId SuccessFileName
    | stats values(*) as * by CorrelationId]
| table CorrelationId InterfaceName ApplicationName FileList SuccessFileName Timestamp
| join CorrelationId type=left
    [ | search index=mulesoft applicationName IN (p-oracle-fin-processor, p-oracle-fin-processor-2, p-wd-finance-api) AND priority IN (ERROR,WARN)
    | rename content.Filename as FailureFileName correlationId as CorrelationId timestamp as ErrorTimestamp content.ErrorType as ErrorType content.ErrorMsg as ErrorMsg
    | table FailureFileName CorrelationId ErrorType ErrorMsg ErrorTimestamp
Use an eval command in each subsearch to set the Log_Source field, and include Log_Source in the table commands so the column actually shows up.

index=idx-esp source="*RUNINFO_HISTORY.*"
| eval Log_Source = "Distributed"
| rename STATUS as Status "COMP CODE" as CompCode APPL as ScheduleName NODE_GROUP as AgentName NODE_ID as HostName CPU_TIME as CPU-Time
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| eval Source=if(like(source,"%RUNINFO_HISTORY%"),"Control-M","ESP")
| dedup Source ScheduleName ScheduleDate AgentName HostName JobName StartTime EndTime
| table Log_Source ScheduleDate JobName StartTime EndTime Duration
| sort 0 - Duration
| head 50
| append
    [search index=idx-esp source="*RUNINFO_HISTORY.*" DATA_CENTER=CPUA JOB_ID="0*"
    | eval Log_Source="Mainframe"
    | eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
    | eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
    | eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
    | eval DurationSecs=floor(UpdatedTime - epoc_start_time)
    | eval Duration=tostring(DurationSecs,"duration")
    | table Log_Source ScheduleDate JOB_MEM_NAME StartTime EndTime Duration
    | sort 0 - Duration
    | head 50]
Hello, @PickleRick. Your first scenario is right.
This requires more thorough debugging. You need to look into your splunkd.log on indexers and possibly on the CM and look for errors.
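splunkd.log is also indexed into _internal, so a first pass can be done from the search head; a sketch (narrow it to the relevant indexers with a host filter):

index=_internal sourcetype=splunkd log_level IN (ERROR, WARN)
| stats count by host component log_level
| sort - count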
I have two queries which give me two tables, named Distributed & Mainframe, as below:

Distributed:

index=idx-esp source="*RUNINFO_HISTORY.*"
| rename STATUS as Status "COMP CODE" as CompCode APPL as ScheduleName NODE_GROUP as AgentName NODE_ID as HostName CPU_TIME as CPU-Time
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| eval Source=if(like(source,"%RUNINFO_HISTORY%"),"Control-M","ESP")
| dedup Source ScheduleName ScheduleDate AgentName HostName JobName StartTime EndTime
| table ScheduleDate JobName StartTime EndTime Duration
| sort 0 - Duration
| head 10

Mainframe:

index=idx-esp source="*RUNINFO_HISTORY.*" DATA_CENTER=CPUA JOB_ID="0*"
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| table ScheduleDate JOB_MEM_NAME StartTime EndTime Duration
| sort 0 - Duration
| head 10

I am trying to append both tables into one, using something like this:

index=idx-esp source="*RUNINFO_HISTORY.*"
| rename STATUS as Status "COMP CODE" as CompCode APPL as ScheduleName NODE_GROUP as AgentName NODE_ID as HostName CPU_TIME as CPU-Time
| eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
| eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
| eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
| eval DurationSecs=floor(UpdatedTime - epoc_start_time)
| eval Duration=tostring(DurationSecs,"duration")
| eval Source=if(like(source,"%RUNINFO_HISTORY%"),"Control-M","ESP")
| dedup Source ScheduleName ScheduleDate AgentName HostName JobName StartTime EndTime
| table ScheduleDate JobName StartTime EndTime Duration
| sort 0 - Duration
| head 50
| append
    [search index=idx-esp source="*RUNINFO_HISTORY.*" DATA_CENTER=CPUA JOB_ID="0*"
    | eval epoc_end_time=strptime(EndTime,"%Y-%m-%d %H:%M:%S")
    | eval epoc_start_time=strptime(StartTime,"%Y-%m-%d %H:%M:%S")
    | eval UpdatedTime=if(isnull(epoc_end_time),_indextime,epoc_end_time)
    | eval DurationSecs=floor(UpdatedTime - epoc_start_time)
    | eval Duration=tostring(DurationSecs,"duration")
    | table ScheduleDate JOB_MEM_NAME StartTime EndTime Duration
    | sort 0 - Duration
    | head 50]

The issue is that I am trying to add a column named "Log_Source" at the start which says either Distributed or Mainframe for its corresponding result. I am not sure how to achieve it. Please help.
Hi all,
I have a question about using RelayState with SAML when using Azure AD B2C as the IdP. We successfully managed to integrate Splunk as the SP with AD B2C as the IdP using SAML and custom policies. Now we want to redirect users to another URL after successful authentication, and the only way forward I could find was the RelayState parameter. Below are all the combinations I tried for the Single Sign On (SSO) URL:

https://<tenant-id>.b2clogin.com/<tenant-id>.onmicrosoft.com/B2C_1A_signup_signin/samlp/sso/login?SAML_Request=<base64_SAML_Auth_Request>&RelayState=https%3A%2F%2FredirectWebsite.com
https://<tenant-id>.b2clogin.com/<tenant-id>.onmicrosoft.com/B2C_1A_signup_signin/samlp/sso/login?RelayState=https%3A%2F%2FredirectWebsite.com
https://<tenant-id>.b2clogin.com/<tenant-id>.onmicrosoft.com/B2C_1A_signup_signin/samlp/sso/login?RelayState=https://redirectWebsite.com

I keep getting the error: "error while parsing relaystate. failed to decode relaystate." Any advice on how to embed the RelayState in the SSO URL?
Good morning,
I am receiving Windows events on a collector with Splunk Edge Processor, and it is sending them correctly to the tenant, but not to the correct index. According to the data, the events go through the pipeline, but they are sent to main instead of the intended index.

This is the SPL2 of the pipeline:

/* A valid SPL2 statement for a pipeline must start with "$pipeline", and include "from $source" and "into $destination". */
$pipeline = | from $source
| eval index = if(isnull(index), "usa_windows", index)
| into $destination;
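One thing worth checking, sketched below: if the incoming Windows events already carry an index value (for example one set by the forwarder's inputs), the isnull(index) branch never fires and the original value wins. Forcing the field unconditionally is a way to test that theory; this assumes every event in this pipeline really should land in usa_windows and that the index exists on the destination:

/* Sketch only: overrides whatever index the events arrived with. */
$pipeline = | from $source
| eval index = "usa_windows"
| into $destination;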
@ITWhisperer, yes, I'm trying to set a token based on the value of has_runtime, since I want to show some charts only if that particular data is present. For this I am trying to create a token that I can use to show or hide the charts.
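For reference, a sketch of one way to do that in Simple XML: run a small search for the has_runtime data, set the token in its done handler, and make the charts depend on it. The index name and the check on job.resultCount are placeholders to adapt:

<search>
  <query>index=your_index has_runtime=* | head 1</query>
  <earliest>-24h@h</earliest>
  <latest>now</latest>
  <done>
    <condition match="'job.resultCount' &gt; 0">
      <set token="show_runtime">true</set>
    </condition>
    <condition>
      <unset token="show_runtime"></unset>
    </condition>
  </done>
</search>

<chart depends="$show_runtime$">
  <search>
    <query>index=your_index has_runtime=* | timechart count</query>
  </search>
</chart>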