All Posts


This is a very very old thread. It's highly unlikely that its participants are still on this forum. If you have a similar problem, just post a question with a description of your issue in a new thread, possibly putting a link to this thread for reference.
And how did you come up with the range() stats function? This function is for something completely different - it tells you the difference between the lowest and highest values in your result set, whereas you want to count things. The range() function is completely unsuited for this. You should be doing count by Account_Name. To make it work over a sliding window, you need to use streamstats with a proper time window:

<your_initial_search>
| streamstats time_window=10m count by Account_Name

This will give you counts of logins over 10-minute windows. From these you'll be able to pick the one with the highest count, for example with | sort - count | head
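For illustration only - a minimal end-to-end sketch under assumptions that are not in the original question (Windows Security failed-logon events, Account_Name already extracted, placeholder index and sourcetype names):

index=wineventlog sourcetype="WinEventLog:Security" EventCode=4625
``` sliding 10-minute window of attempts per account ```
| streamstats time_window=10m count AS attempts BY Account_Name
``` keep only the busiest account/window ```
| sort - attempts
| head 1
| table _time Account_Name attempts

The sort and head at the end simply surface the account with the largest number of attempts seen in any 10-minute window.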
I want to search for an Account_Name that has the maximum number of login attempts within a span of 10 minutes using the range() function... I don't know how I can provide the parameters to this function. Some help would be appreciated!
Where exactly do you see this error?
Hi @rsreese, sorry, I didn't notice the "Dashboard Studio" label! No, it works only in Classic dashboards: I haven't started using Dashboard Studio yet because it still can't do everything I can do with the Classic version. Ciao. Giuseppe
Hi @Drewprice, there are some conceptual and logical errors in your search: first, you have to define a time period for the check, e.g. every 10 minutes; otherwise there's no sense in using _time in your search. The second error is that you don't need to transform the timestamp into a human-readable format. Lastly - but this is an interpretation of mine - why do you want to calculate the peak? Usually you calculate the amount of sent bytes in a period, and in any case you're using the sum function, so you aren't calculating the peak (for the peak you should use max). So you should try something like this: if you want to trigger an alert when the amount of bytes in one minute is more than 5,000,000, you could run

index=netfw host="firewall" srcname IN (host1,host2,host3...) action=allowed dstip=8.8.8.8
| timechart sum(sentbyte) AS count span=1m
| where count>5000000

or

index=netfw host="firewall" srcname IN (host1,host2,host3...) action=allowed dstip=8.8.8.8
| bin span=1m _time
| stats sum(sentbyte) AS count BY _time
| where count>5000000

Ciao. Giuseppe
Hi, I have a search that shows the output of traffic as sum(sentbyte). This is my search, names have been changed to protect the guilty:

index=netfw host="firewall" srcname IN (host1,host2,host3...) action=allowed dstip=8.8.8.8
| eval mytime=strftime(_time,"%Y/%m/%d %H %M")
| stats sum(sentbyte) by mytime

The results show the peak per minute, which I can graph with a line chart, and they range up to 10,000,000. I have tried to set up alerting when the sum(sentbyte) is over 5,000,000 but cannot get it to trigger. My alert is set to custom: | stats sum(sentbyte) by mytime > 5000000 I may be on the wrong track for what I am trying to do, but I have spent many hours going in circles with this one. Any help is greatly appreciated.
I need drop down default taken as log1,log2,log3

As @dural_yyz hinted at, the actual solution depends very much on how you will use the input token in search. If you want the comma-delimited list as part of an IN() function, you can just use appendpipe.

| inputlookup mylookup
| appendpipe
    [stats values(log_group) as log_group
    | eval Name = "Any", log_group = mvjoin(log_group, ",")]

You get

Name      log_group
Name 1    Log1
Name 2    log2
Name 3    log3
Any       log1,log2,log3
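To show where the "Any" row's comma-delimited value ends up - the token and index names below are hypothetical, not from the original dashboard - the search driven by the dropdown could look something like:

index=my_index log_group IN ($log_group_tok$)
| stats count BY log_group

When a user picks "Any", $log_group_tok$ expands to log1,log2,log3 and IN() matches all of them, which is exactly why the mvjoin is done in the appendpipe.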
Although you can get rex to work to some extent, treating structured data such as JSON as a string is not robust. I always recommend changing to Splunk's tested builtin functions such as spath or fromjson. If your event is JSON, Splunk should have given you the data field unless there's some serious problem with event parsing. If the string snippet is part of a field that contains compliant JSON, say data, just do | spath input=data If the snippet is not in a field yet, use rex to extract the entire compliant JSON, then use spath. You will have much better data to work with.
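A minimal sketch of that last suggestion (extract the whole JSON with rex, then hand it to spath). The event content and field names here are made up for illustration, not taken from the original post:

| makeresults
| eval _raw="prefix text {\"user\": \"alice\", \"status\": \"ok\"} suffix"
``` data emulation above ```
| rex field=_raw "(?<json_blob>\{.*\})"
| spath input=json_blob
| table user status

rex captures the outermost brace-delimited block into json_blob, and spath then extracts user and status as proper fields.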
We are using index=windows and index=perfmon for Windows servers. For Linux servers we are using index=os. These servers have memory utilization, CPU, and performance data.
As @PickleRick said, this is not about parsing but about presentation, and the spath command that we usually use doesn't handle JSON keys containing a dot (.) correctly, because SPL, like many other languages that flatten structured data, uses the dot to represent hierarchy. But keys containing dots are not the only problem that makes @dtburrows3's solution so wonky. The bigger problem is the data design. It uses implied semantics about what represents a URL. Implied semantics in structured data is generally unacceptable. (At a higher level, this is abusing key names to represent data.) If you have any influence with your developers, beg them to change the data structure so key and data are completely separate. Thanks to @dtburrows3, I learned that fromjson (introduced in 9.0) is more robust than spath (from 7.0 or earlier), and learned the trick of leveraging the evil dot in the key name to single out actual data in the abused structure, namely foreach *.*. It is less robust but works for the limited dataset. So, I offer a more semantic, hopefully less wonky solution.

| makeresults
| eval _raw="{ \"a.com\": [ { \"yahoo.com\":\"10ms\",\"trans-id\": \"x1\"}, { \"google.com\":\"20ms\",\"trans-id\": \"x2\"} ], \"trans-id\":\"m1\", \"duration\":\"33ms\" }"
``` data emulation above ```
| table _*
| fromjson _raw
| rename duration as Duration, trans-id as Trans_id
| foreach * [eval url = mvappend(url, if("<<FIELD>>" IN ("Duration", "Trans_id"), null, "<<FIELD>>"))]
| mvexpand url
``` nothing in this data structure prevents multiple URLs ```
| foreach *.*
    [mvexpand <<FIELD>>
    | eval subkey = json_array_to_mv(json_keys('<<FIELD>>'))
    | eval sub_trans_id = json_extract('<<FIELD>>', "trans-id")
    | eval subdata = json_object()
    | eval subdata = mvmap(subkey, if(subkey == "trans-id", null(), json_set(subdata, "sub_url", subkey, "sub_duration", json_extract_exact('<<FIELD>>', subkey))))]
| fromjson subdata
| table _time Trans_id url Duration sub_duration sub_url sub_trans_id

The output is

_time                Trans_id  url    Duration  sub_duration  sub_url     sub_trans_id
2024-01-19 13:01:54  m1        a.com  33ms      10ms          yahoo.com   x1
2024-01-19 13:01:54  m1        a.com  33ms      20ms          google.com  x2
And what data do you have in your Splunk? How should your Splunk know about all this?
Based on that log, I suggest that you raise a support case with Splunk and continue with them.
Hi @sgabriel1962  As you noticed yourself, you're responding to an old thread regarding a relatively old and unsupported version of Splunk. So even if your problem seems similar, it is quite likely that it's caused by a different thing (especially since the original one was supposed to be due to a bug which should have been patched long ago). Instead of digging up an old thread, it's better to create a new one with a detailed description of your problem (and possibly a link to the old thread as a reference to something you found while looking for solutions but which may not be applicable to your situation).
Just search your data over a month period and calculate your statistics. But seriously - we have no way of knowing what data you have in your Splunk environment, what "TPS" or "route codes" mean in this context, and what it is you want to get (especially as you mention a TPS parameter, which suggests "something per second", most probably transactions of some kind, but then talk about a count - how would you want to "match" that TPS to a count of something else?). Provide some sample (anonymized if needed) data, be more precise about what you want to achieve (possibly including mockup results) and we can try to think about your problem.
Which endpoint are you pushing your events to - /raw or /event? The /event endpoint, unless called with the ?auto_extract_timestamp=true parameter, skips timestamp recognition completely. Anyway, if there is no timezone info contained within the timestamp itself, Splunk sets the timezone for parsing the timestamp according to these settings: https://docs.splunk.com/Documentation/Splunk/latest/Data/Applytimezoneoffsetstotimestamps#Specify_time_zones_in_props.conf If you're sending to the /raw endpoint, you need to make sure you have proper timezone info set either for the whole forwarder where the HEC input is defined or for the particular sourcetype. If you're sending to the /event endpoint and you're not parsing the timestamp from the data, you need to make sure the proper timestamp (this time formatted as a unix timestamp) is being sent with the event or - if there is no timestamp info sent with the event (not _in_ the event - _with_ it) - make sure that time (including timezone) is properly configured on the receiving forwarder.
How to get peak TPS stats for a month with the count of all route codes?
But 'assuming local time' is OK if the server is GMT/UTC, right? Then the user's timezone should adjust? This exact same sourcetype and configuration works at another installation, so something must be configured differently. Here is an example of it 'working' (attached) where the timestampStr is UTC, but the _time is showing as Central Standard Time, which is my user-defined timezone.
Those CSS examples would work quite well, but are they supported in Dashboard Studio or only in Classic dashboards? In Dashboard Studio, it's not clear how you can use different icons for different values in a single field.
Your timestamp raw feed does not have a time zone indicator. In that case the indexer will assume the local system time zone and apply that in most cases. If your collection point (UF) is -4 and your indexing point (IDX) is -5, it will assume that the timestamp is also -5. A best practice is to always define your time zone and never allow Splunk to automagically assume one based on the datetime.xml auto extractions. https://docs.splunk.com/Documentation/Splunk/9.1.2/Admin/Propsconf#Timestamp_extraction_configuration
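For illustration, a minimal props.conf sketch of what "define your time zone" could look like on the instance that first parses the data - the sourcetype name here is a placeholder, not taken from this thread:

# props.conf on the forwarder/indexer that parses the data
# (placeholder sourcetype name - use the sourcetype your input actually assigns)
[my_custom_sourcetype]
# interpret timestamps that carry no zone indicator as UTC
TZ = UTC

If the timestamp also needs explicit parsing, TIME_PREFIX and TIME_FORMAT settings would go in the same stanza.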