All Posts



Try something like this:

| eval successtime=if(status=200,_time,null())
| streamstats range(successtime) as successrange count(successtime) as successcount window=3 by status global=f
| where successcount=3 and successrange > 10
Thank you, it's working.
I'm currently building my own home instance and I'm having some trouble with my UF.

So far I've:
- installed the latest / correct version for my Ubuntu Linux system
- run: sudo chown -RP splunk:splunk /opt/splunkforwarder/
- searched through SplunkForwarder.service to see if the correct user is applied (which it is)
- tried re-installing and running ./splunk enable boot-start as the splunk user, and as root

When using the splunk user, I have to authenticate as root anyway, but I get the same results for both.

./splunk start results in "Done" after authentication.

./splunk status results in:
Warning: Attempting to revert the SPLUNK_HOME ownership
Warning: Executing "chown -R splunk:splunkfwd /opt/splunkforwarder"
Couldn't change ownership for /opt/splunkforwarder/etc : Operation not permitted
splunkd is not running.

./splunk enable boot-start results in:
"A systemd unit file already exists at path="/etc/systemd/system/SplunkForwarder.service". To add a Splunk generated systemd unit file, run 'splunk disable boot-start' before running this command. If there are custom settings that have been added to the unit file, create a backup copy first."

It seems no matter which account I use or which user has permissions, I'm unable to access any of the files under /opt/splunkforwarder, nor am I able to start the UF itself or configure boot-start.
Thanks for your reply on this. We were seeing a JSON array, which Splunk failed to recognize and make searchable. We were using Lambda for transformation, and one change to the Firehose configuration from "Raw" to "Event" for the field "Splunk End Point" helped resolve the issue. Also, I changed the source type to "aws:cloudwatch" based on the tests written for the Lambda: https://github.com/splunk/splunk-aws-cloudwatch-streaming-metrics-processor/blob/main/SplunkAWSCloudWatchStreamingMetricsProcessor/test_lambda_function.py

It would be good if the documentation (Source types for the Splunk Add-on for AWS - Splunk Documentation) could also be updated to say that the source type "aws:cloudwatch" should be used when the Lambda function splunk-aws-cloudwatch-streaming-metrics-processor is used for streaming. This request can be closed with the above comments.
| where count >= 10 AND count <= 19

Then trigger your alert if there are any results.
The where command doesn't support wildcards in this way - try using search instead of where.
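For illustration (the field name JOBNAME and the pattern are placeholders taken from the dashboard in question): search matches * as a wildcard directly, while where needs the like() eval function, where % is the wildcard - a minimal sketch:

```
| search JOBNAME="JOB*"
| where like(JOBNAME, "JOB%")
```

Alternatively, match(JOBNAME, "^JOB") achieves the same in where using a regular expression.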
Hello everyone, I am looking for a Splunk search query to get the total duration of three sequential response code 200 events. It is not about the average time or the duration of one message, but whether three Success message responses took more than 10 seconds in total. Thanks in advance.
Technically, yes it is possible (probably), but it is not simple. Visualisations work on series, i.e. all points / bars from the same series are shown in the same colour. What you could do is duplicate the incoming series: in one copy of the series, set the value to zero if the outgoing count is greater than zero, and in the other copy, set the value to zero if the outgoing count is zero. Using randomly generated values, this demonstrates what I mean:

| gentimes start=-1 increment=1h
| rename starttime as _time
| fields _time
| eval incoming=random()%10
| eval outgoing=random()%10
| eval unprocessed = if(outgoing > 0, 0, incoming)
| eval incoming = if(outgoing > 0, incoming, 0)
Hi

Sorry, I want to create an input (free text) on the field JOBNAME, which is extracted via rex. Is it possible?

The input below works fine when I put a job name in the free_text input, but when I give nothing or * in the free_text input, it gives me no result.

<input type="text" token="free_text" searchWhenChanged="true">
  <label>Free_Text</label>
  <default>*</default>
  <prefix>| where JOBNAME = "</prefix>
  <suffix>"</suffix>
  <initialValue>*</initialValue>
</input>

Is there any way to create an input filter as free text for the field JOBNAME? I am using a free text input because there are more than 500 jobs, and a dropdown does not look good.
Hi Team, I am trying to set up an alert if the count of errors is in the range of 10 to 19. For example: index=abc sourcetype=xyz "errors" should trigger the alert only if count >= 10 AND count <= 19. Please help, thank you.
I really need Splunk to update my name. I have raised a ticket twice, and both times I was told "it's not their job, please visit the Splunk support page", which ends up in an infinite loop. For context, I recently changed my official name, i.e. the name on my passport. Without updating it in my Splunk profile, I won't be able to take exams, as the IDs don't match.
Hi

Is it possible to use any graph/visualization to show that in the last 30 mins INCOMING is greater than 0 and OUTGOING = 0?
They are being issued. For me it took two weeks to arrive, so the time varies.
When you say "seen in Splunk as json data which is not searchable", do you mean the data is not appearing in Splunk at all, or that it is possible to use search to retrieve the logs but they do not extract fields? Splunk handles JSON logs straightforwardly, so if the logs do reach Splunk in proper JSON format, it should have no problem indexing them and even automatically extracting fields.
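If the events do reach Splunk but the fields are not extracted automatically, spath can pull them out at search time. A minimal sketch using a made-up event shape (makeresults just fabricates a test row; the field names are illustrative only):

```
| makeresults
| eval _raw="{\"status\":200,\"detail\":{\"region\":\"us-east-1\"}}"
| spath
| table status detail.region
```

With no arguments, spath extracts every field from the JSON in _raw, which mimics what automatic KV extraction would do on a proper JSON sourcetype.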
Given that you already have 5 minute counts, do you want a rolling 30 minutes, i.e. a 6 event window? If so, you could use something like this:

| streamstats sum(*) as *_last_30mins window=6
Paste the raw events into a codeblock e.g. {"timestamp":"2024-04-29 11:59:59","user":"ITWhisperer","Account":1234}
Thanks for the feedback, should I export the results of my searches as csv or some other way? Thanks
Hi @diogofgm, permissions are right: we use our domain accounts, set to have admin rights on all Splunk hosts in our env. I performed further analysis and I found something strange. Let me share with you another set of inputs.

In the environment under analysis, we have 4 indexers in a cluster. Above them, we have 3 SHs NOT in a cluster, plus a fourth one, the one with ES. So, in a nutshell:
- 3x SH Splunk Core (no SH cluster) + 1 ES SH
- 4x IDX clustered

Using btool, I checked the indexes.conf deployed on the indexer cluster, and I found that, on all 4 IDXs, there are only 2 indexes.conf files:
- $SPLUNK_HOME$/etc/apps/slave-apps/_cluster/local/indexes.conf
- $SPLUNK_HOME$/etc/system/default/indexes.conf

As I expected, the one in the default folder is the system-provided one, not edited by whoever performed the initial installation and setup (another company did this, not us). So, I checked the one in _cluster and, as I expected, it is the one where all the indexes created by previous admins have been put... except the one that gives me the problem. I mean: inside $SPLUNK_HOME$/etc/apps/slave-apps/_cluster/local/indexes.conf I can find the custom indexes (there are 262 of them) but NOT the one (pan_logs) that raises the issue. There is no trace of it on the indexers (at least, in the files I checked).

So I thought: hey, wait a minute, could it be deployed directly on the SH? I checked indexes.conf on the SH where I can successfully query the index, but again I found no trace of it. It appears to be, let me say, a "ghost" index: no trace of it on the SHs and IDXs, but there is a SH able to query it.
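One way to track down where such a "ghost" index is defined is btool with the --debug flag, which prints the exact file each setting comes from (run on each SH and IDX; the path assumes a default install location):

```
$SPLUNK_HOME/bin/splunk btool indexes list pan_logs --debug
```

If nothing shows up there either, a search like | eventcount summarize=false index=pan_logs will at least report which server is actually returning events for that index.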
I am not sure where rex comes into it - you could set up a static dropdown like this:

Label     Value
Incoming  | where DIRECTION=="INCOMING"
Outgoing  | where DIRECTION=="OUTGOING"
Both      (empty)

Then just place the token in your search after the DIRECTION eval:

| eval DIRECTION = case('JOBNAME' == "$VVF119P", "INCOMING", 'JOBNAME' == "$VV537UP", "OUTGOING", 1=1, "NA")
$direction_selector_token$
| eval Diff=ENDED_TIME-STARTED_TIME
From other forum posts, you have probably seen that volunteers usually work better with sample anonymised representative events. Please can you share some events, preferably in a code block </>, so that we have something to work with (to test our solutions before posting them)?