All Posts


Hi @Drewprice, there are some conceptual and logical errors in your search. First, you have to define a time period for the check, e.g. every 10 minutes; otherwise there's no point in using _time in your search. The second error is that you don't need to transform the timestamp into a human-readable form. Finally, and this is my interpretation, why do you want to calculate the peak? Usually one calculates the amount of sent bytes in a period, and in any case you use the sum function, so you are not calculating the peak (for the peak you should use max).

So, if you want to trigger an alert when the amount of bytes in one minute exceeds a given threshold, you should run something like this:

index=netfw host="firewall" srcname IN (host1,host2,host3...) action=allowed dstip=8.8.8.8
| timechart sum(sentbyte) AS count span=1m
| where count>5000000

or

index=netfw host="firewall" srcname IN (host1,host2,host3...) action=allowed dstip=8.8.8.8
| bin span=1m _time
| stats sum(sentbyte) AS count BY _time
| where count>5000000

Ciao. Giuseppe
Hi, I have a search that shows the output of traffic as sum(sentbyte). This is my search; names have been changed to protect the guilty:
________________________________________________
index=netfw host="firewall" srcname IN (host1,host2,host3...) action=allowed dstip=8.8.8.8
| eval mytime=strftime(_time,"%Y/%m/%d %H %M")
| stats sum(sentbyte) by mytime
________________________________________________
The results show the peak per minute, which I can graph with a line chart, and they range up to 10,000,000. I have tried to set up alerting for when the sum(sentbyte) is over 5,000,000 but cannot get it to trigger. My alert is set to custom:

| stats sum(sentbyte) by mytime > 5000000

I may be on the wrong track for what I am trying to do, but I have spent many hours going in circles with this one. Any help is greatly appreciated.
I need drop down default taken as log1,log2,log3

As @dural_yyz hinted at, the actual solution depends very much on how you will use the input token in search. If you want the comma-delimited list as part of the IN function, you can just use appendpipe.

| inputlookup mylookup
| appendpipe
    [stats values(log_group) as log_group
    | eval Name = "Any", log_group = mvjoin(log_group, ",")]

You get

Name      log_group
Name 1    Log1
Name 2    log2
Name 3    log3
Any       log1,log2,log3
Although you can get rex to work to some extent, treating structured data such as JSON as a string is not robust. I always recommend switching to Splunk's tested built-in functions such as spath or fromjson. If your event is JSON, Splunk should have given you the data field unless there's some serious problem with event parsing. If the string snippet is part of a field that contains compliant JSON, say data, just do

| spath input=data

If the snippet is not in a field yet, use rex to extract the entire compliant JSON, then use spath. You will have much better data to work with.
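As a minimal sketch of that second approach (the field name json_blob and the rex pattern are illustrative assumptions, not from the original post):

```
| rex "(?<json_blob>\{.+\})"   ``` capture the whole JSON object into a field ```
| spath input=json_blob        ``` then let spath extract proper fields from it ```
```

The rex pulls the compliant JSON object out of _raw into json_blob, and spath then does the structured extraction rather than fragile string matching.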
For Windows servers we are using index=windows and index=perfmon. For Linux servers we are using index=os. These indexes contain memory utilization, CPU, and performance data for the servers.
As @PickleRick said, this is not about parsing but about presentation; the spath command that we usually use does not handle JSON keys containing a dot (.) correctly, because SPL, like many other languages that flatten structured data, uses the dot to represent hierarchy. But keys containing dots are not the only problem that makes @dtburrows3's solution so wonky. The bigger problem is the data design. It uses implied semantics about what represents a URL. Implied semantics in structured data is generally unacceptable. (At a higher level, this is abusing key names to represent data.) If you have any influence with your developers, beg them to change the data structure so keys and data are completely separate.

Thanks to @dtburrows3, I learned that fromjson (introduced in 9.0) is more robust than spath (from 7.0 or earlier), and learned the trick of leveraging the evil dot in key names in order to single out actual data in the abused structure, namely foreach *.*. It is less robust but works for the limited dataset. So, I offer a more semantic, hopefully less wonky solution.
| makeresults
| eval _raw="{ \"a.com\": [ { \"yahoo.com\":\"10ms\",\"trans-id\": \"x1\"}, { \"google.com\":\"20ms\",\"trans-id\": \"x2\"} ], \"trans-id\":\"m1\", \"duration\":\"33ms\" }"
``` data emulation above ```
| table _*
| fromjson _raw
| rename duration as Duration, trans-id as Trans_id
| foreach *
    [eval url = mvappend(url, if("<<FIELD>>" IN ("Duration", "Trans_id"), null, "<<FIELD>>"))]
| mvexpand url
``` nothing in this data structure prevents multiple URLs ```
| foreach *.*
    [mvexpand <<FIELD>>
    | eval subkey = json_array_to_mv(json_keys('<<FIELD>>'))
    | eval sub_trans_id = json_extract('<<FIELD>>', "trans-id")
    | eval subdata = json_object()
    | eval subdata = mvmap(subkey, if(subkey == "trans-id", null(), json_set(subdata, "sub_url", subkey, "sub_duration", json_extract_exact('<<FIELD>>', subkey))))]
| fromjson subdata
| table _time Trans_id url Duration sub_duration sub_url sub_trans_id

The output is

_time                Trans_id  url    Duration  sub_duration  sub_url     sub_trans_id
2024-01-19 13:01:54  m1        a.com  33ms      10ms          yahoo.com   x1
2024-01-19 13:01:54  m1        a.com  33ms      20ms          google.com  x2
And what data do you have in your Splunk? How should your Splunk know about all this?
Based on that log, I suggest that you raise a support case with Splunk and continue with them.
Hi @sgabriel1962

As you noticed yourself, you're responding to an old thread regarding a relatively old and unsupported version of Splunk. So even if your problem seems similar, it is quite likely that it's caused by a different thing (especially since the original one was supposed to be due to a bug which should have been patched long ago). Instead of digging up an old thread, it's better to create a new one with a detailed description of your problem (and possibly a link to the old thread as a reference to something you found while looking for solutions, but which may not be applicable to your situation).
Just search your data over a month period and calculate your statistics. But seriously, we have no way of knowing what data you have in your Splunk environment, what "TPS" or "route codes" mean in this context, or what it is you want to get. In particular, you mention a TPS parameter, which suggests "something per second" (most probably transactions of some kind), but then talk about a count. How would you want to match those TPS to the count of something else? Provide some sample data (anonymized if needed), be more precise about what you want to achieve (possibly including mockup results), and we can try to think about your problem.
Which endpoint are you pushing your events to, /raw or /event? The /event endpoint, unless called with the ?auto_extract_timestamp=true parameter, skips timestamp recognition completely. Anyway, if there is no timezone info contained within the timestamp itself, Splunk sets the timezone for parsing the timestamp according to these settings: https://docs.splunk.com/Documentation/Splunk/latest/Data/Applytimezoneoffsetstotimestamps#Specify_time_zones_in_props.conf

If you're sending to the /raw endpoint, you need to make sure you have proper timezone info set either for the whole forwarder where the HEC input is defined or for the particular sourcetype. If you're sending to the /event endpoint and you're not parsing the timestamp from the data, you need to make sure the proper timestamp (this time formatted as a unix timestamp) is being sent with the event, or, if there is no timestamp sent with the event (not _in_ the event, _with_ it), make sure that time (including timezone) is properly configured on the receiving forwarder.
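For illustration, a sketch of sending an explicit unix timestamp with an /event payload (host, token, and sourcetype are placeholders, not from this thread). The top-level time field travels with the event rather than inside it, so no timestamp extraction is needed:

```
POST /services/collector/event HTTP/1.1
Host: splunk.example.com:8088
Authorization: Splunk <hec-token>

{"time": 1705691824.603, "sourcetype": "my_sourcetype", "event": {"message": "call record"}}
```

With time supplied like this, _time is taken directly from the payload (as a UTC epoch value) and timezone configuration on the receiving side no longer matters for this event.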
How to get peak TPS stats for a month with the count of all route codes ?
But 'assuming local time' is OK if the server is GMT/UTC, right? Then the user's timezone should adjust? This exact same sourcetype and configuration works at another installation, so something must be configured differently. Here is an example of it 'working' (attached), where timestampStr is UTC but _time is showing as Central Standard Time, which is my user-defined timezone.
Those CSS examples would work quite well, but are they supported in Dashboard Studio or only in Classic Dashboards? In Dashboard Studio, it's not clear how you can use different icons for different values in a single field.
Your raw timestamp feed does not have a time zone indicator. In that case the indexer will assume the local system time zone and apply that in most cases. If your collection point (UF) is -4 and your indexing point (IDX) is -5, it will assume that the timestamp is also -5. A best practice is to always define your time zone and never allow Splunk to automagically assume one based on the datetime.xml auto extractions.

https://docs.splunk.com/Documentation/Splunk/9.1.2/Admin/Propsconf#Timestamp_extraction_configuration
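As a sketch of that best practice (the sourcetype name and TIME_PREFIX are assumptions; adapt to your own data), pinning the time zone in props.conf on the first full Splunk instance that parses the data might look like:

```
# props.conf -- hypothetical sourcetype stanza, adjust names to your environment
[my_custom_sourcetype]
TIME_PREFIX = timestampStr\"\s*:\s*\"
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%9N
TZ = UTC
MAX_TIMESTAMP_LOOKAHEAD = 40
```

With TZ set explicitly, Splunk interprets the extracted timestamp as UTC regardless of the server's local zone, and each user's timezone preference then only affects display.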
Hi, I am implementing a Splunk SOAR connector and I was wondering if it is possible to write logs at different levels. Different levels can be configured under System Health / Debugging, but the BaseConnector only has debug_print and error_print methods. How can I print INFO, WARNING and TRACE logs from my connector?

Thanks, Eduardo
I have a similar situation in my environment: making the changes to restmap.conf prevents the App Launcher from loading, this is true, and I have version 9.1.2, where the fix should already have been applied.
So... I have a HEC receiving JSON for phone calls using a custom sourcetype which parses the timestamp from a field called timestampStr, which looks like this:

2024-01-19 19:17:04.60313329

The sourcetype uses TIME_FORMAT with %Y-%m-%d %H:%M:%S.%9N, and this sets _time to 2024-01-19 19:17:04.603 in the event. Which SEEMS RIGHT.

However, if I then, as a user in the Central time zone, ask for the calls 'in the last 15 minutes' (assuming I just made the call), it does not show up. And, in fact, to find it, I have to use ALL_TIME, because the call seems to be 'in the future' relative to my user (which feels dumb/weird to write because I know it's not technically true, but it's the best I can explain it). Here is what I mean (example):

1. I placed a call into the network at 1:17 PM CENTRAL TIME (which is 19:17 UTC).
2. The application sent a JSON message of that call.
3. It came in as above.
4. I then ran a search in Splunk looking for that call in the LAST 15 MINUTES and it did not find it.
5. However, I immediately asked for ALL_TIME and it did.

My assumption was that if my USER SETTINGS are set to Central, it would 'correlate the calls' OK, but this appears not to be true; or, rather, more likely, my understanding of what is going on is very poor. So I, once again, return to the community looking for answers.
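One way to sanity-check where the offset comes from (a sketch; it assumes timestampStr is available as a search-time field, and truncates the subseconds to sidestep fractional parsing) is to compare _time against the string re-parsed as UTC:

```
index=... sourcetype=...
| eval parsed_utc = strptime(substr(timestampStr, 1, 19)." +0000", "%Y-%m-%d %H:%M:%S %z")
| eval offset_hours = round((_time - parsed_utc)/3600, 2)
| table _time timestampStr offset_hours
```

If offset_hours comes out as a whole nonzero number (e.g. 5 or 6), the indexer parsed the timestamp in its local zone instead of UTC, which matches the 'calls in the future' symptom.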
I haven't had much success with dynamically setting individual fields within an asset config. That being said, if you have a set list of URLs for that field, you can configure an asset per base_url you need, and then pass in the asset name as a parameter.  You may also have some success if you edit the HTTP app itself and modify this functionality for your own use cases, but that's a bit more complicated.
You cannot do this with a simple event search as you attempted. To add fields (sometimes called "enrichment"), you need to use the lookup command. (Or join with inputlookup and sacrifice performance, but that doesn't apply in your case.) Your question is really about wanting to match a wildcard at the beginning of a key, which lookup does not support. Given your sample data, you don't seem to have a real choice, so you will have to take some performance penalty and perform the string matches yourself.

People (including myself) used to work around similar limitations in lookup with awkward mvzip-mvexpand-split sequences, and that code is difficult to maintain. Since 8.2, Splunk has offered a set of JSON functions that can represent data structure more expressively. Here is one method:

| makeresults count=4
| streamstats count
| eval number = case(count=1, 25, count=2, 39, count=3, 31, count=4, null())
| eval string1 = case(count=1, "I like blue berries", count=3, "The sea is blue", count=2, "black is all colors", count=4, "Theredsunisredhot")
| table string1
| append
    [| inputlookup wildlookup.csv
    | tojson output_field=wildlookup
    | stats values(wildlookup) as wildlookup
    | eval wild = json_object()
    | foreach wildlookup mode=multivalue
        [ eval wild = json_set(wild, json_extract(<<ITEM>>, "colorkey"), <<ITEM>>)]
    | fields wild]
| eventstats values(wild) as wild
| where isnotnull(string1)
| eval colors = json_keys(wild)
| foreach colors mode=json_array
    [eval colorkey = mvappend(colorkey, if(match(string1, <<ITEM>>), <<ITEM>>, null()))]
| mvexpand colorkey ``` in case of multiple matches ```
| foreach flagtype active
    [eval <<FIELD>> = json_extract(json_extract(wild, colorkey), "<<FIELD>>")]
| eval flag = "KEYWORD FLAG"
| table flagtype, flag, string1, colorkey

Note that I stripped fields that are irrelevant to the resultant table. I also made provisions to protect against possible multiple color matches.
The output is

flagtype  flag          string1              colorkey
sticker   KEYWORD FLAG  I like blue berries  blue
          KEYWORD FLAG  black is all colors
sticker   KEYWORD FLAG  The sea is blue      blue
tape      KEYWORD FLAG  Theredsunisredhot    red

Hope this helps.