
All Posts

Thanks for your quick response. I tried using spath as well, but it seems the field is not getting extracted in between, as the error suggests: Field 'orderTypesTotal' does not exist in the data. Do you think an extracted JSON would have an issue whereas a raw JSON would work with spath? My JSON payload is only created after adding an extraction via a regex on the raw event.
Hello, I'm facing a problem with my lookup command. Here is the context: I have one CSV:

pattern   type
*ABC*     1
*DEF*     2
*xxx*     3

And logs with a "url" field, e.g. "xxxxabcxxxxx.google.com". I need to check whether, in the url field of my log, any of the possibilities in my lookup are present, and if so, how many of them match this field. My expected result is:

url                        type   count(type)
xxxxabcxxxxx.google.com    1, 3   2

How can I do this?
- The "| lookup" command doesn't take the "*" symbol into account; only space or comma work with the WILDCARD config.
- The "| inputlookup" command works, but can't display the field "type" because it only exists in my CSV, so I can't count either.
Thanks for your answers
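One possible approach (an untested sketch; the file name patterns.csv, the lookup name pattern_types, and the index are placeholders, adjust to your environment): define the lookup with a wildcard match type and allow multiple matches, then count the matches with mvcount.

transforms.conf (or "Advanced options" in the lookup definition UI):

[pattern_types]
filename = patterns.csv
match_type = WILDCARD(pattern)
max_matches = 100

Search:

index=your_index url=*
| lookup pattern_types pattern AS url OUTPUT type
| eval match_count=mvcount(type)
| table url type match_count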
@ITWhisperer The code is working now; I have adapted it for a dashboard. Thanks for your thorough and brilliant help.
Did you get to a solution for this? I'm running version 4.4 and have the exact same issue. Thanks
Well, Splunk doesn't treat inf and -inf, mentioned in that same section, as numbers either. Anyway, I need to add additional logic to sanitize inputs that might have fields with the text "NaN" (this does occasionally happen when the source is a SQL query). Either way, for most purposes it just isn't a number, and it tends to cause problems in further processing.
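In case it's useful to anyone else, a minimal sanitizing sketch (assuming the field is literally named value; adjust to your data):

| eval value=if(lower(value)=="nan" OR lower(value)=="inf" OR lower(value)=="-inf", null(), value)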
Federated searching is one possible approach. Another is to spin up a "central" search head and add all your distributed peers/clusters as search peers. Probably the federated search approach is easier to maintain in the long run.
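For the central-search-head variant, each peer can be attached with the CLI, e.g. (hostname and credentials are placeholders):

splunk add search-server https://peer01.example.com:8089 -auth admin:changeme -remoteUsername admin -remotePassword peerpassword

or via Settings > Distributed search > Search peers in the UI.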
You can use SEDCMD to remove all lines not beginning with two hashes. Something like

SEDCMD-remove-unhashed = s/^([^#]|#[^#]).*$//

(I haven't tested it though, it might need some tweaking.)
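For reference, that setting would live in props.conf on the first heavy forwarder or indexer that parses the data, under your sourcetype stanza (the sourcetype name here is a placeholder):

[your_sourcetype]
SEDCMD-remove-unhashed = s/^([^#]|#[^#]).*$//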
Hi guys, I did this and it worked in replacing the comma, thank you. The N/A is in case it's empty.

| eval value=if(value!="N/A", replace(tostring(value,"commas"),","," "), value)

But the thing is, I can't order it correctly.
index="tput_summary" sourcetype="tput_summary_1d" | bin _time span="h" | table + _time LocationQualifiedName location date_hour date_mday date_minute date_month date_month date_second date_wday date_... See more...
index="tput_summary" sourcetype="tput_summary_1d" | bin _time span="h" | table + _time LocationQualifiedName location date_hour date_mday date_minute date_month date_month date_second date_wday date_year count | where like(LocationQualifiedName, "%/Aisle%Entry%") | strcat "raw" "," location group_name | where like(LocationQualifiedName,"%/Aisle%Entry%") OR like(LocationQualifiedName,"%/Aisle%Exit%") | strcat "raw" "," location group_name | timechart sum(count) as cnt by location
@ITWhisperer I have used the code below to obtain the token results in the macros. Please provide your suggestions; are there any changes needed?

<change>
  <eval token="time.earliest_epoch">if('earliest'="",0,if(isnum(strptime('earliest', "%s")),'earliest',relative_time(now(),'earliest')))</eval>
  <eval token="time.latest_epoch">if(isnum(strptime('latest', "%s")),'latest',relative_time(now(),'latest'))</eval>
  <eval token="macro_token">if($time.latest_epoch$ - $time.earliest_epoch$ &gt; 2592000, "throughput_macro_summary_1d", if($time.latest_epoch$ - $time.earliest_epoch$ &gt; 86400, "throughput_macro_summary_1h", "throughput_macro_raw"))</eval>
  <eval token="form.span_token">if($time.latest_epoch$ - $time.earliest_epoch$ &gt; 2592000, "d", if($time.latest_epoch$ - $time.earliest_epoch$ &gt; 86400, "h", $form.span_token$))</eval>
</change>
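In case it helps anyone reading along, a hypothetical panel search consuming those tokens might look like this (the macro invocation and span usage here are assumptions, not taken from the original dashboard):

<search>
  <query>`$macro_token$` | timechart span=1$form.span_token$ sum(count) as count</query>
  <earliest>$time.earliest_epoch$</earliest>
  <latest>$time.latest_epoch$</latest>
</search>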
Thank you very much for the clarification. Yes, valid rows start with ##, and each event is what is inside each ##start_string and ##end_string block. From the UI, is there any way to do the first step and remove the rows that do not start with ##? BR JAR
Have you considered federated searches? https://www.splunk.com/en_us/blog/platform/introducing-splunk-federated-search.html?locale=en_us  
| spath {}.orderTypesTotal{} output=orderTypesTotal
| mvexpand orderTypesTotal
| spath input=orderTypesTotal
| stats sum(totalFailedTransactions) as totalFailedTransactions sum(totalSuccessfulTransactions) as totalSuccessfulTransactions sum(totalTransactions) as totalTransactions by orderType
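If the JSON is not in _raw but in an extracted field (say the field is called payload; the name is a placeholder), point the first spath at that field explicitly:

| spath input=payload path={}.orderTypesTotal{} output=orderTypesTotal
| mvexpand orderTypesTotal
| spath input=orderTypesTotal
| stats sum(totalFailedTransactions) as totalFailedTransactions sum(totalSuccessfulTransactions) as totalSuccessfulTransactions sum(totalTransactions) as totalTransactions by orderType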
| fieldformat value=replace(tostring(value,"commas"),","," ")
Hello Expert Splunk Community, I am struggling with a JSON extraction and need help/advice on how to do this operation.

Data sample:

[
  {
    "orderTypesTotal": [
      { "orderType": "Purchase", "totalFailedTransactions": 0, "totalSuccessfulTransactions": 0, "totalTransactions": 0 },
      { "orderType": "Sell", "totalFailedTransactions": 0, "totalSuccessfulTransactions": 0, "totalTransactions": 0 },
      { "orderType": "Cancel", "totalFailedTransactions": 0, "totalSuccessfulTransactions": 1, "totalTransactions": 1 }
    ],
    "totalTransactions": [
      { "totalFailedTransactions": 0, "totalSuccessfulTransactions": 1, "totalTransactions": 1 }
    ]
  }
]

[
  {
    "orderTypesTotal": [
      { "orderType": "Purchase", "totalFailedTransactions": 10, "totalSuccessfulTransactions": 2, "totalTransactions": 12 },
      { "orderType": "Sell", "totalFailedTransactions": 1, "totalSuccessfulTransactions": 2, "totalTransactions": 3 },
      { "orderType": "Cancel", "totalFailedTransactions": 0, "totalSuccessfulTransactions": 1, "totalTransactions": 1 }
    ],
    "totalTransactions": [
      { "totalFailedTransactions": 11, "totalSuccessfulTransactions": 5, "totalTransactions": 16 }
    ]
  }
]

I have the above events coming inside a field in my _raw events. Using json(field) I have validated that the above is valid JSON.

Use case: I need a table with the totals of all the different order types, summing the "totalFailedTransactions", "totalSuccessfulTransactions" and "totalTransactions" numbers across events:

orderType   totalFailedTransactions   totalSuccessfulTransactions   totalTransactions
Purchase    10                        2                             12
Sell        1                         2                             3
Cancel      0                         2                             2

Thanks in advance!
Sam
Hi, thank you for the reply. I tried that, and it works for replacing the comma with a space, but the issue is that when I click on the column to sort in descending order, the numbers are not sorted correctly, because some of them are strings and others are numbers. I also tried the tonumber() conversion.
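One way around it (a sketch, assuming the field is called value): keep the underlying field numeric and do the comma-to-space conversion only at display time with fieldformat, so the UI still sorts on the numeric value:

| eval value=tonumber(value)
| fieldformat value=replace(tostring(value,"commas"),","," ")
| sort - value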
Thank you, let me check.  
It would be nice to get the real log format in the first phase, not after the first version has been resolved! Do all valid log rows start with ##? If so, you should add a transforms.conf stanza which drops the other lines away. If there is no way to recognise them without looking at ##start_string and ##end_string, then you probably must write some preprocessing or your own modular input. Splunk's normal input processing handles those lines one by one; it cannot keep track of the other lines and whether something is happening between them or not.
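A minimal sketch of that approach (the stanza and sourcetype names are placeholders, and it assumes each line is its own event at parse time; anything not starting with ## is routed to the nullQueue and discarded):

props.conf:

[your_sourcetype]
TRANSFORMS-drop_unhashed = drop_unhashed

transforms.conf:

[drop_unhashed]
REGEX = ^(?!##)
DEST_KEY = queue
FORMAT = nullQueue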
Have a look at the transaction command options, e.g. keeporphans and keepevicted, to see if they will give you what you need: https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Transaction
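For example (an untested sketch; the startswith/endswith values are taken from the markers mentioned earlier in this thread):

| transaction startswith="##start_string" endswith="##end_string" keeporphans=true keepevicted=true

keeporphans=true keeps events that do not belong to any transaction, and keepevicted=true keeps transactions that were closed before their endswith condition was met.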