Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Thank you very much for the clarification. Yes, valid rows start with ##, and each event is what sits inside each ##start_string and ##end_string block. From the UI, is there any way to do the first step and remove the rows that do not start with ##? BR JAR
Have you considered federated searches? https://www.splunk.com/en_us/blog/platform/introducing-splunk-federated-search.html?locale=en_us  
| spath {}.orderTypesTotal{} output=orderTypesTotal
| mvexpand orderTypesTotal
| spath input=orderTypesTotal
| stats sum(totalFailedTransactions) as totalFailedTransactions sum(totalSuccessfulTransactions) as totalSuccessfulTransactions sum(totalTransactions) as totalTransactions by orderType
| fieldformat value=replace(tostring(value,"commas"),","," ")
Hello Expert Splunk Community, I am struggling with a JSON extraction and need help/advice on how to do this operation.

Data sample:

[ { "orderTypesTotal": [ { "orderType": "Purchase", "totalFailedTransactions": 0, "totalSuccessfulTransactions": 0, "totalTransactions": 0 }, { "orderType": "Sell", "totalFailedTransactions": 0, "totalSuccessfulTransactions": 0, "totalTransactions": 0 }, { "orderType": "Cancel", "totalFailedTransactions": 0, "totalSuccessfulTransactions": 1, "totalTransactions": 1 } ], "totalTransactions": [ { "totalFailedTransactions": 0, "totalSuccessfulTransactions": 1, "totalTransactions": 1 } ] } ]

[ { "orderTypesTotal": [ { "orderType": "Purchase", "totalFailedTransactions": 10, "totalSuccessfulTransactions": 2, "totalTransactions": 12 }, { "orderType": "Sell", "totalFailedTransactions": 1, "totalSuccessfulTransactions": 2, "totalTransactions": 3 }, { "orderType": "Cancel", "totalFailedTransactions": 0, "totalSuccessfulTransactions": 1, "totalTransactions": 1 } ], "totalTransactions": [ { "totalFailedTransactions": 11, "totalSuccessfulTransactions": 5, "totalTransactions": 16 } ] } ]

I have the above events coming inside a field in the _raw events. Using json(field) I have validated that the above is valid JSON.

Use case: I need the totals across all events for each order type, using the "totalFailedTransactions", "totalSuccessfulTransactions", and "totalTransactions" numbers, in a table:

orderType   totalFailedTransactions   totalSuccessfulTransactions   totalTransactions
Purchase    10                        2                             12
Sell        1                         2                             3
Cancel      0                         2                             2

Thanks in advance! Sam
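To experiment with the spath/mvexpand/stats answer earlier in this thread, here is a small emulation sketch. It uses only the first sample event pasted into _raw via makeresults; with real data the JSON would come from your own events, so drop the first two lines:

| makeresults
| eval _raw="[{\"orderTypesTotal\":[{\"orderType\":\"Purchase\",\"totalFailedTransactions\":0,\"totalSuccessfulTransactions\":0,\"totalTransactions\":0},{\"orderType\":\"Sell\",\"totalFailedTransactions\":0,\"totalSuccessfulTransactions\":0,\"totalTransactions\":0},{\"orderType\":\"Cancel\",\"totalFailedTransactions\":0,\"totalSuccessfulTransactions\":1,\"totalTransactions\":1}],\"totalTransactions\":[{\"totalFailedTransactions\":0,\"totalSuccessfulTransactions\":1,\"totalTransactions\":1}]}]"
``` emulation above; extraction below ```
| spath {}.orderTypesTotal{} output=orderTypesTotal
| mvexpand orderTypesTotal
| spath input=orderTypesTotal
| stats sum(totalFailedTransactions) as totalFailedTransactions sum(totalSuccessfulTransactions) as totalSuccessfulTransactions sum(totalTransactions) as totalTransactions by orderType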
Hi, thank you for the reply. I tried that, and it works for replacing the comma with a space, but the next issue is that when I click on the column to sort in descending order, the numbers are not sorted correctly, because some of them are strings and others are numbers. I also tried the conversion tonumber().
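One approach is to keep the underlying field numeric and apply the space-separator formatting only at display time, so the sort stays numeric. A minimal sketch, assuming the column is literally named value and that the raw values may still contain commas:

| eval value=tonumber(replace(tostring(value), ",", ""))
``` sort on the numeric value, not on the formatted string ```
| sort - value
``` fieldformat changes only how the value is displayed, not what is stored ```
| fieldformat value=replace(tostring(value, "commas"), ",", " ")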
Thank you, let me check.  
It would be nice to get the real log format in the first phase, not after the first version has been resolved! Do all valid log rows start with ##? If so, you should add a transforms.conf stanza that drops the other lines. If there is no way to recognise those lines without looking at ##start_string and ##end_string, then you probably must write some preprocessing or your own modular input. Splunk's normal input processing handles those lines one by one; it cannot keep track of the other lines or of whether anything is happening in them or not.
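As a rough sketch of that transforms.conf approach (the sourcetype your_sourcetype and the stanza names are made up for illustration; this assumes the junk lines arrive as their own events, i.e. before any multi-line merging of the ##start_string/##end_string blocks):

props.conf:

[your_sourcetype]
TRANSFORMS-dropnoise = drop_non_hash_lines

transforms.conf:

[drop_non_hash_lines]
REGEX = ^(?!##)
DEST_KEY = queue
FORMAT = nullQueue

Anything that does not begin with ## is routed to nullQueue and never indexed; everything else passes through untouched.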
Have a look at the transaction command options, e.g. keeporphans and keepevicted, to see if they will give you what you need: https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Transaction
The firewall request from the Splunk server (e.g. a HF) should cover both the actual database physical hosts/IPs and the SCAN host/IP (if you are using that in the tns listener files). In a RAC setup there may be multiple IPs/hosts, so you would need access to all of them from the Splunk server. Another test would be to run nslookup on the SCAN IP/hostname/FQDN; if it returns multiple IPs/hosts, submit firewall requests for those as well.
Since you have given your problem statement in generic terms, I will answer in the same manner. You could look at using the eventstats command to add/copy the exception indicator to all events with the corresponding request id. Then you can filter the events by whether the exception indicator is present.
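A minimal sketch of that idea (the index name, time range, field names request_id/is_error, and the error pattern are all illustrative, not taken from your logs):

index=your_index earliest=-15m
``` mark events that themselves look like errors or exceptions ```
| eval is_error=if(match(_raw, "(?i)error|exception"), 1, 0)
``` copy that flag to every event sharing the same request_id ```
| eventstats max(is_error) as request_has_error by request_id
``` keep all events belonging to a request that raised an error ```
| where request_has_error > 0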
Try changing the other search commands to their corresponding where commands
Hi, I need to find errors/exceptions raised within a time range and, using the request_id field present on every log row, fetch the relevant logs in Splunk for that request_id and send the link to a Slack channel. I am able to fetch all errors/exceptions within the time range and send them to Slack, but I am not able to generate the relevant logs for the request_id mentioned with the error/exception, because it is dynamic in nature. I am new to Splunk, so I would like to understand: is this possible? If yes, could you please share relevant documentation so that I can understand it better? Thank you so much.
Thanks Rich - logical when you think about it. Works a treat - thank you
Support is correct that the documented behavior is not a bug. That explains your original observation about tonumber. Your observation about typeof is also normal. Imagine you are the interpreter. In typeof("NaN"), you are given a string literal; of course you say that's of type String. In typeof(num), you are given a variable whose value is documented as a number; you say that's of type Number.
Hi! I am filtering data from a number of hosts, looking for downtime durations. I get a "forensic" view with this search string:

index=myindex host=*
| rex "to state\s(?<mystate>.*)"
| search mystate="DOWN " OR mystate="*UP "
| transaction by host startswith=mystate="DOWN " endswith=mystate="*UP "
| table host,duration,_time
| sort by duration
| reverse

...where I rex for the specific pattern of "to state " (the host transitioning into another state, in this example "DOWN" or "UP"). I had to do another search to get only the specific ones, as there are more states than DOWN/UP (due to my anonymization of the data). I can then retrieve the duration between transitions using duration and sort it as I please.

My question: I would like to look at ongoing, at-this-moment-active hosts in state "DOWN", i.e. replace endswith with a nominal time value ("now"). Where there has not yet been any endswith match, I would simply count the duration from startswith to the present moment. Any tips on how I can formulate that properly?
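Building on the transaction options mentioned in the reply above, one possible sketch (keepevicted exposes transactions that never matched endswith; those carry closed_txn=0, so their duration can be measured up to now). The state strings mirror the ones in your search and may need adjusting to your real data:

index=myindex host=*
| rex "to state\s(?<mystate>.*)"
| search mystate="DOWN " OR mystate="*UP "
| transaction host startswith=eval(mystate=="DOWN ") endswith=eval(like(mystate, "%UP ")) keepevicted=true
``` transactions with no UP yet have closed_txn=0; count their duration from the DOWN event to the present moment ```
| eval duration=if(closed_txn==0, now() - _time, duration)
| table host duration _time
| sort - duration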
So, I've been talking to Splunk support, who directed me to the documentation at SearchReference/Eval, which kind of mentions that NaN is special, and also pointed to typeof() as an alternative. Initially this seemed like a good idea, but unfortunately typeof() is even more interesting:

| makeresults
| eval t=typeof("NaN")
| eval num="NaN"
| eval tnum=typeof(num)

...returns

t = String
tnum = Number

Oh well....?
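Depending on what you ultimately need, one workaround is to test for the literal string before calling tonumber(), so "NaN" never reaches the conversion. A minimal sketch (num, num_clean and is_numeric are illustrative names):

| makeresults
| eval num="NaN"
``` treat the literal string "NaN" as missing before converting ```
| eval num_clean=if(num=="NaN", null(), tonumber(num))
| eval is_numeric=if(isnotnull(num_clean), "yes", "no")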
This is a task for spath and JSON functions, not for rex. Important: do not treat structured data as strings. But most importantly, if there is a worse data design in JSON than this, I haven't seen it before. (Trust me, I've seen many bad JSON designs.) Your developers really go to lengths to show the world how lazy they can be. After lengthy reverse engineering, I cannot decide whether this is the poorest transliteration of an existing data table into JSON, or the worst imagined construction of a data table in JSON. Using two separate arrays to describe rows and columns saves a little bit of space, but unlike in a SQL database, this makes search less efficient (unless the application reads the whole thing and reconstructs the data into an in-memory SQL database). It also wastes text describing the data type of each column when JSON already has enough datatypes to distinguish strings and numbers.

Enough ranting. If you have any influence over your developers, make them change the event format to something like this instead:

[{"id":"BM0077","event":35602782,"delays":3043.01,"drops":0},{"id":"BM0267","event":86497692,"delays":1804.55,"drops":44},{"id":"BM059","event":1630092,"delays":5684.5,"drops":0},{"id":"BM1604","event":2920978,"delays":4959.1,"drops":2},{"id":"BM1612","event":2141607,"delays":5623.3,"drops":6},{"id":"BM1834","event":74963092,"delays":2409,"drops":8},{"id":"BM2870","event":41825122,"delays":2545.34,"drops":7}]

With properly designed JSON data, extraction can be as simple as

``` extract from well-designed JSON ```
| spath path={}
| mvexpand {}
| spath input={}
| fields - _* {}

Output with your sample data (see emulation below) will be

delays    drops  event     id
3043.01   0      35602782  BM0077
1804.55   44     86497692  BM0267
5684.5    0      1630092   BM059
4959.1    2      2920978   BM1604
5623.3    6      2141607   BM1612
2409      8      74963092  BM1834
2545.34   7      41825122  BM2870

This is how to reach the good JSON structure from the sample's bad structure:

``` transform from bad JSON to well-designed JSON ```
| spath path={}
| mvexpand {}
| fromjson {}
| mvexpand rows
| eval idx = mvrange(0, mvcount(columns))
| eval data = json_object()
| foreach idx mode=multivalue
    [eval row = mvindex(json_array_to_mv(rows), <<ITEM>>),
     data = json_set(data, json_extract(mvindex(columns, <<ITEM>>), "text"), row)]
| stats values(data) as _raw
| eval _raw = mv_to_json_array(_raw, true())

But until your developers change their minds, you can do one of the following.

1. String together the bad-JSON-to-good-JSON transformation and the normal extraction:

``` transform, then extract ```
``` transform from bad JSON to well-designed JSON ```
| spath path={}
| mvexpand {}
| fromjson {}
| mvexpand rows
| eval idx = mvrange(0, mvcount(columns))
| eval data = json_object()
| foreach idx mode=multivalue
    [eval row = mvindex(json_array_to_mv(rows), <<ITEM>>),
     data = json_set(data, json_extract(mvindex(columns, <<ITEM>>), "text"), row)]
| stats values(data) as _raw
| eval _raw = mv_to_json_array(_raw, true())
``` extract from well-designed JSON ```
| spath path={}
| mvexpand {}
| spath input={}
| fields - _* {}

2. Transform the poorly designed JSON into table format with a little help from kv aka extract, like this:

``` directly handle bad design ```
| spath path={}
| mvexpand {}
| fromjson {}
| fields - _* {} type
| mvexpand rows
| eval idx = mvrange(0, mvcount(columns))
``` the above is the same as the transformation ```
| foreach idx mode=multivalue
    [eval _raw = mvappend(_raw, json_extract(mvindex(columns, <<ITEM>>), "text") . "=" . mvindex(json_array_to_mv(rows), <<ITEM>>))]
| fields - rows columns idx
| extract

This will give you the same output as above. Here is a data emulation you can use to play with the above methods and compare with real data:

| makeresults
| eval _raw = "[{\"columns\":[{\"text\":\"id\",\"type\":\"string\"},{\"text\":\"event\",\"type\":\"number\"},{\"text\":\"delays\",\"type\":\"number\"},{\"text\":\"drops\",\"type\":\"number\"}],\"rows\":[[\"BM0077\",35602782,3043.01,0],[\"BM1604\",2920978,4959.1,2],[\"BM1612\",2141607,5623.3,6],[\"BM2870\",41825122,2545.34,7],[\"BM1834\",74963092,2409.0,8],[\"BM0267\",86497692,1804.55,44],[\"BM059\",1630092,5684.5,0]],\"type\":\"table\"}]"
``` data emulation above ```

A final note: spath has some difficulty handling arrays of arrays, so I used fromjson (available since 9.0) in one filter.
Thank you @marnall. However, in my log (which has some lines between an event's end marker and the next event's start), something is wrong:

Some info extra info
##start_string
##time = 1711292017
##Field2 = 12
##Field3 = field_value
##Field4 = somethingelse
##Field8 = 1
##Field7 = 12
##Field6 = 1
##Field5 =
##end_string
Some info more info extra info
##start_string
##time = 1711291017
##Field2 = 12
##Field3 = field_value2
##Field4 = somethingelse3
##Field8 = 14
##Field7 = 12
##Field6 = 15
##Field5 =
##end_string
SOme info more info info extra info
##start_string
##time = 1711282017
##Field2 = 12
##Field3 = asrsar
##Field4 = somethingelsec
##Field8 = 1
##Field7 = 12
##end_string
Some info extra info

Any idea how to delimit events between the markers ##start_string and ##end_string?

BR JAR
It's such a mystery! I think I have tried everything already; each time the query has the eval command, no results come back.