All Posts



Please share the query, and please tell us what "not working" means. What results do you get, and how do those results not meet expectations?
In that case you could rework your search so that it returns either zero rows or one row depending on whether the condition is met, and set your token based on the number of results returned.
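As a sketch of that pattern (the token and search here are made up, not taken from the thread), a condition on the job's result count in the search's done handler can set or unset the token:

```xml
<search>
  <query>index=main sourcetype=my_sourcetype error_code=500
| stats count
| where count &gt; 0</query>
  <earliest>-24h@h</earliest>
  <latest>now</latest>
  <done>
    <!-- one row returned: condition met, show the dependent panel -->
    <condition match="'job.resultCount' &gt; 0">
      <set token="show_panel">true</set>
    </condition>
    <!-- zero rows: condition not met, hide it -->
    <condition>
      <unset token="show_panel"></unset>
    </condition>
  </done>
</search>
```

A panel with `depends="$show_panel$"` then appears only when the condition search returns a row.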
Your choices are to work on the F5 LB: speak to your network team about the VIP/pool/failover/iRules config and test it out to see what works best. (It's not my area of expertise, I'm just concept-aware.) Note: the Splunk UF is not a load balancer in the networking sense. It has an auto load-balancing function to spray data across multiple indexers, if you have several of them, and to even the data out; it is not designed to fail over to another UF based on load. The UF is an agent that collects data and sends it to Splunk.
Hi @karthi2809, after a stats command, only the fields named in the stats command remain, so in your case you no longer have the priority and message fields that you use in the evals after the stats. Either move those evals before the stats, or add the related fields to the stats. Ciao. Giuseppe
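For example (a sketch using field names from the question, not a tested query), carrying priority and message through the stats so the later evals still see them:

```
index=mulesoft environment=* applicationName IN ("Test")
| stats values(priority) as priority
        values(message) as message
        min(timestamp) as Logon_Time
        max(timestamp) as Logoff_Time
        by correlationId applicationName
| eval Status=case(priority="ERROR","ERROR", priority="WARN","WARN", true(),"SUCCESS")
```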
Hi @pichertklaus, if you're sure that the files (written by syslog-ng) contain all the events, then you have to look at the Splunk inputs. Are you sure there is no duplicated data in these files? Splunk doesn't index duplicated data twice, even if it arrives under different file names. Then, are you sure about the parsing? If the missed events fall in the first 12 days of each month, there may be a timestamp parsing issue related to the timestamp format (day and month swapped). Did you check whether several events are being grouped into a single event? If so, the issue is in the event-breaking rules. Ciao. Giuseppe
Hi, I'd like to use a text box input field to add a string value into a multiselect, in order to use the multiselect token to filter out values currently in the multiselect (with true) for each search query I use:

<input type="text" token="filter_out_text_input" id="filter_out_text_input">
  <label>Enter a log event you want to filter out</label>
  <prefix>"*</prefix>
  <suffix>*"</suffix>
</input>
<input type="multiselect" token="filter_out_option" id="filter_out_option">
  <label>List to filter out log events</label>
  <valuePrefix>NOT "*</valuePrefix>
  <valueSuffix>*"</valueSuffix>
  <delimiter> OR </delimiter>
</input>
. . .
<title>$app$ Error Frequency</title>
<chart>
  <search>
    <query>index="$app$-$env$" logLevel="ERROR" $filter_out_option$ $filter_out_text_input$
| eval filter_out_option="$filter_out_option$"
| where isnotnull(filter_out_option) AND filter_out_option!=""
| eval filter_out_text_input="$filter_out_text_input$"
| where isnotnull(filter_out_text_input) AND filter_out_text_input!=""
| multikv
| eval ReportKey="error rate"
| timechart span=30m count by ReportKey</query>
    <earliest>$time_range.earliest$</earliest>
    <latest>$time_range.latest$</latest>
    <sampleRatio>1</sampleRatio>
    <refresh>1m</refresh>
    <refreshType>delay</refreshType>
  </search>
  <option name="charting.chart">area</option>
  <option name="charting.chart.nullValueMode">connect</option>
  <option name="charting.chart.showDataLabels">all</option>
  <option name="charting.chart.stackMode">default</option>
  <option name="charting.drilldown">none</option>
  <option name="charting.layout.splitSeries">1</option>
  <option name="refresh.display">progressbar</option>
</chart>

I would like to filter out error strings for the above search. Thanks in advance
Running into this same issue. I tried calling both sales and their partner program and have not received a call back in 5+ business days.
We are running a syslog-ng system which receives the data from various appliances. From what I can tell, on the syslog server itself all data is stored in files per sending host/date, and the event count matches the event count on the generating host. We checked some random samples for accuracy, so the syslog server itself does not seem to be the limit.

sudo syslog-ng-ctl query get "source.*"
source.s_udp514.processed=844024
source.s_tcp514.processed=11100270
source.s_tcp1514.processed=3150959

Syslog Server: 2 CPUs, 8 GB RAM

We are running 2 Heavy Forwarders which receive the data from the Universal Forwarder installed on the syslog-ng server and send it on to 6 Splunk Indexers. As we do not operate the HFs/IDXs, I cannot say much about their sizing.
I'm trying to use an outer join but I am not getting the desired output. It looks like the query on the left has fewer events than the subsearch query. Could that be the reason for the outer join not working? I can't use stats because the two queries span multiple indexes and sourcetypes.
How can I create a custom table in a Splunk view that stores some user credentials, and how can I create a button that opens a new-record form which users can use to submit the information in Splunk? I have attached an image for reference.
I want to add a download/export button, which I am able to do, but the result of the CSV search is also visible in the panel, like below. I want to show only the download button while hiding the results panel, which I am not able to do.

<row>
  <panel>
    <table>
      <search>
        <done>
          <eval token="date">strftime(now(), "%d-%m-%Y")</eval>
          <set token="sid">$job.sid$</set>
        </done>
        <query>index=test</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="count">20</option>
      <option name="dataOverlayMode">none</option>
      <option name="drilldown">none</option>
      <option name="percentagesRow">false</option>
      <option name="refresh.display">progressbar</option>
      <option name="rowNumbers">false</option>
      <option name="totalsRow">false</option>
      <option name="wrap">true</option>
    </table>
    <html>
      <a href="/api/search/jobs/$sid$/results?isDownload=true&amp;timeFormat=%25FT%25T.%25Q%25%3Az&amp;maxLines=0&amp;count=0&amp;filename=test_$date$.csv&amp;outputMode=csv" class="button js-button">Download</a>
      <style>
        .button {
          background-color: steelblue;
          border-radius: 5px;
          color: white;
          padding: .5em;
          text-decoration: none;
        }
        .button:focus, .button:hover {
          background-color: #2A4E6C;
          color: White;
        }
      </style>
    </html>
  </panel>
</row>
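One common workaround (a sketch, not a tested dashboard: the id csv_export_table is made up and must match whatever id you give your own table element) is to give the table an id and hide it with CSS from the same html block, so only the download link stays visible while the search job still runs and produces the sid:

```xml
<!-- give the existing table an id so CSS can target it -->
<table id="csv_export_table">
  <search>
    <done>
      <set token="sid">$job.sid$</set>
    </done>
    <query>index=test</query>
  </search>
</table>
<html>
  <style>
    /* hide the whole table panel; the search still executes */
    #csv_export_table { display: none; }
  </style>
  <a href="/api/search/jobs/$sid$/results?isDownload=true&amp;outputMode=csv" class="button js-button">Download</a>
</html>
```

The table element must remain in the dashboard (hidden, not removed), because deleting it would also delete the search whose job id the download link depends on.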
Adding the 'ess_user' role:

To edit and create a new Incident Review entry while still in the 'user' role, you need to add the 'ess_user' role to your current user role. This is necessary because the capabilities required for this task are set on 'ess_user'. The 'ess_user' role should have the following capabilities:
- edit_notable_events: allows the role to create new (ad-hoc) notable events and edit existing ones.
- edit_log_review_settings: permits the role to edit Incident Review settings.

With these capabilities added, you should be able to edit and create a new Incident Review entry.

Configuring permissions in Splunk Enterprise Security:

Navigate to Configure -> General -> Permissions in Splunk Enterprise Security and ensure 'ess_user' is given the following permissions:
- Create New Notable Events
- Edit Incident Review
- Edit Notable Events

Note: the 'ess_analyst' role can be assigned directly to a user, enabling them to manage Incident Review dashboards; a user with 'ess_analyst' can already edit notable events.
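If you prefer configuration files over the UI, the same inheritance can be expressed in a local authorize.conf on the ES search head (a sketch under the assumption that your custom role is named role_user; the ES capability names are as documented, but verify them against your version):

```
# local/authorize.conf on the ES search head (sketch)

[role_user]
# inherit the ES capabilities by importing ess_user
importRoles = user;ess_user
```

Restart or reload the search head after the change; the UI route described above achieves the same result without an edit to the file.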
Hello.

We are deploying a new search head in our Splunk environment, using Windows 2019 servers as the platform. The search head is not working, and we can see these errors on the indexer:

WARN BundleDataProcessor [12404 TcpChannelThread] - Failed to create file E:\Splunk\var\run\searchpeers\[search_head_hostname]-1713866571.e035b54cfcafb33b.tmp\apps\TA-microsoft-graph-security-add-on-for-splunk\bin\ta_microsoft_graph_security_add_on_for_splunk\aob_py2\cloudconnectlib\splunktacollectorlib\data_collection\ta_checkpoint_mng.py while untarring E:\Splunk\var\run\searchpeers\[search_head_hostname]-1713866571.bundle: The system cannot find the path specified.

The file name (including the path) exceeds the 260-character limit on Windows. How can we use this add-on?
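One thing worth testing (heavily hedged: this only applies on Windows Server 2016/2019 and later, and it only helps if the application performing the file operation opts in to long paths, so it may not fix Splunk's untar step; shortening the install path or trimming the deep directories from bundle replication are the usual alternatives) is enabling long-path support in the registry:

```
Windows Registry Editor Version 5.00

; Allow Win32 paths longer than 260 characters (requires reboot;
; applications must still opt in via their manifest)
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\FileSystem]
"LongPathsEnabled"=dword:00000001
```

If that does not help, excluding the add-on's deepest directories from knowledge-bundle replication (via a replication denylist in distsearch.conf) keeps the bundle paths under the limit.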
Hi @gcusello

Yes, for that I used stats values of the field name, but I am not able to separate the error and success files. This is my new query:

index=mulesoft environment=* (applicationName IN ("Test"))
| stats values(content.FileList{}) as FileList values(content.FileName) as Filename values(content.Filename) as filename1 min(timestamp) AS Logon_Time, max(timestamp) AS Logoff_Time BY correlationId applicationName
| eval Status=case(priority="ERROR","ERROR", priority="WARN","WARN", priority!="ERROR","SUCCESS")
| eval SuccessFileName=mvdedup(mvfilter(match(message, "%succesfully*") OR match(message, "Summary of all Batch*")))
| eval SuccessFileName=coalesce(Filename,filename1)
| eval FailureFileName=mvdedup(mvfilter(match(priority, "WARN") OR match(priority, "ERROR")))
| eval FailureFileName=coalesce(Filename,filename1)
| table SuccessFileName FailureFileName
Hi @pichertklaus, this can happen if you have many events or too few resources on your HFs and IDXs. First, I suggest using an rsyslog (or syslog-ng) server to receive the syslogs, so you still capture them if Splunk is down or overloaded. Then: how many events are you receiving via syslog, and what resources do your servers have? Ciao. Giuseppe
Hello @auzark,

You can assign a particular field to _indextime and then use that to find the difference. The only catch is that _indextime is in epoch time, so you'll have to convert GenerationTime into epoch format before calculating the difference. Your query should look something like this:

index=splunk_index sourcetype=splunk_sourcetype
| eval tnow = now()
| eval indexTime = _indextime
| eval GenerationTime_epoch=strptime(GenerationTime,"%Y-%m-%d %H:%M:%S")
| convert ctime(tnow)
| convert ctime(_indextime) as Index_Time
| eval secondsDifference=indexTime-_time
| eval GenTimeDifferenceInSeconds=indexTime-GenerationTime_epoch
| table Node EventNumber GenerationTime Index_Time _time secondsDifference GenTimeDifferenceInSeconds

Thanks,
Tejas.
---
If the above solution helps, an upvote is appreciated!!
Hi All,

We have a strange problem here. On a Linux syslog server, the logs from different systems are each saved to a file. These files are monitored by a Splunk UF and forwarded to two heavy forwarders to be saved on the indexers. We have now noticed that the number of events in the Splunk index sometimes differs from the syslog data delivered; sometimes events are missing in the middle. Since reports and alerts are configured on the Splunk data, it is of course essential that ALL events arrive in Splunk. Is such behavior known, and where can I find out, for example, how many events have been processed on the HFs?

Regards
Klaus
Convert GenerationTime into epoch format, then take the difference between the result and _indextime.

index=splunk_index sourcetype=splunk_sourcetype
| eval tnow = now()
| convert ctime(tnow)
| convert ctime(_indextime) as Index_Time
| eval secondsDifference=_indextime-_time
| eval genEpoch = strptime(GenerationTime, "%Y-%m-%d %H:%M:%S")
| eval genSecondsDifference = _indextime - genEpoch
| table Node EventNumber GenerationTime Index_Time _time secondsDifference genSecondsDifference
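The arithmetic is the same outside Splunk; a minimal Python sketch (the field values are invented for illustration) shows how strptime turns the timestamp string into epoch seconds so the indexing lag is a plain subtraction:

```python
from datetime import datetime, timezone

def lag_seconds(generation_time: str, index_time_epoch: float) -> float:
    """Parse a '%Y-%m-%d %H:%M:%S' UTC timestamp and return the
    indexing lag in seconds (index time minus generation time)."""
    gen_epoch = (
        datetime.strptime(generation_time, "%Y-%m-%d %H:%M:%S")
        .replace(tzinfo=timezone.utc)
        .timestamp()
    )
    return index_time_epoch - gen_epoch

# Event generated at 12:00:00 UTC, indexed 90 seconds later
print(lag_seconds("2024-04-23 12:00:00", 1713873690.0))  # → 90.0
```

A positive result means the event was indexed after it was generated, which is the usual direction of the lag you want to monitor.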
Hi @karthi2809, to help you I also need the main search. Anyway, you should:
- create a main search putting the three searches in OR,
- correlate them using the stats command by the common key, adding values(field_name) as field_name for each field that you want to display.
Ciao. Giuseppe
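The steps above take this generic shape (a sketch with placeholder index, sourcetype, and field names, since the main search is not in the thread):

```
(index=idx1 sourcetype=st1) OR (index=idx2 sourcetype=st2) OR (index=idx3 sourcetype=st3)
| stats values(field1) as field1
        values(field2) as field2
        values(field3) as field3
        by common_key
```

Because stats runs once over the combined events, it avoids the subsearch result limits that make join unreliable when the two sides return different event counts.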
Hi All,

I have a field called filename, and I want to populate the result from the filename field; I created two joins to separate the success and failure files. Is there any other way to get the success and failure files without using join?

| join CorrelationId type=left
    [ | search index=mulesoft applicationName IN (TEST) AND message IN ("*File put Succesfully*", "*successful Call*", "*file processed successfully*", "*Archive file processed successfully*", "*processed successfully for file name*")
    | rename content.Filename as SuccessFileName correlationId as CorrelationId
    | table CorrelationId SuccessFileName
    | stats values(*) as * by CorrelationId]
| table CorrelationId InterfaceName ApplicationName FileList SuccessFileName Timestamp
| join CorrelationId type=left
    [ | search index=mulesoft applicationName IN (p-oracle-fin-processor, p-oracle-fin-processor-2, p-wd-finance-api) AND priority IN (ERROR,WARN)
    | rename content.Filename as FailureFileName correlationId as CorrelationId timestamp as ErrorTimestamp content.ErrorType as ErrorType content.ErrorMsg as ErrorMsg
    | table FailureFileName CorrelationId ErrorType ErrorMsg ErrorTimestamp