All Posts

Good morning, I hope you can help me. We maintain a Splunk Enterprise infrastructure with a SIEM, and we need to forward the security events to Elastic and Kafka. I would like to know how I could forward the events, and whether this will consume license.
Hello Splunk community, I have this query, but I would also like to retrieve the index to which the sourcetype belongs:
index=_internal splunk_server=* source=*splunkd.log* sourcetype=splunkd (component=AggregatorMiningProcessor OR component=LineBreakingProcessor OR component=DateParserVerbose OR component=MetricSchemaProcessor OR component=MetricsProcessor) (log_level=WARN OR log_level=ERROR OR log_level=FATAL)
| rex field=event_message "\d*\|(?<st>[\w\d:-]*)\|\d*"
| eval data_sourcetype=coalesce(data_sourcetype, st)
| rename data_sourcetype as sourcetype
| table sourcetype event_message component thread_name _time _raw
| stats first(event_message) as event_message by sourcetype component
Any ideas? Thanks in advance.
If your JSON was already in a field, you could have used the input parameter to spath (this defaults to _raw) | spath input=<your field> ...
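For example, a minimal sketch, assuming the JSON had already been pulled into a field called payload by an earlier extraction (the index, sourcetype, regex and field name are all placeholders):

index=your_index sourcetype=your_sourcetype
| rex field=_raw "prefix:\s(?<payload>\{.+\})"
| spath input=payload

With input=payload, spath walks the extracted JSON instead of _raw, so its fields become available to the rest of the search.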
The difference is that with SEDCMD you can "blank" part of a multiline event. If you send to nullQueue, you'll discard the whole event.
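As a purely illustrative sketch (the sourcetype stanza and pattern are made up), this is the kind of SEDCMD that masks only a fragment and leaves the rest of the multiline event intact:

# props.conf - applied at parse time on the indexer or heavy forwarder
[my_sourcetype]
# replace only the password value; everything else in the event is kept
SEDCMD-mask_password = s/password=\S+/password=####/g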
eval'ed my extracted payload to _raw and voila, it works! Thanks a lot for your time and expertise!
Then I suggest you use transforms.conf and send those lines to dev null. There are quite a few examples on the community and in the docs, e.g. https://community.splunk.com/t5/Getting-Data-In/sending-specific-events-to-nullqueue-using-props-amp-amp/m-p/660688 - just replace the REGEX so it matches your line, or the beginning of your line. SEDCMD does almost the same thing, but it only clears the line rather than removing it, so you are still left with "empty" lines in your events instead of having them dropped.
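Roughly along the lines of the linked post, a sketch of the props/transforms pairing (the sourcetype name, transform name and REGEX are placeholders to adapt):

# props.conf
[my_sourcetype]
TRANSFORMS-drop_unwanted = drop_unwanted_lines

# transforms.conf
[drop_unwanted_lines]
REGEX = ^(?:[^#]|#[^#])
DEST_KEY = queue
FORMAT = nullQueue

Keep in mind this routes whole events to the null queue, so it only removes individual lines if your line breaking makes each line its own event.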
Fieldformat just keeps those values as numbers, instead of converting them to strings as happens with eval.
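A small illustration (the field name value is just an example):

| makeresults count=3
| streamstats count as value
| eval value = value * 1234567
| fieldformat value = tostring(value, "commas")
| sort - value

Because fieldformat only changes how the value is rendered, the sort stays numeric; with eval value=tostring(value,"commas") the field would become a string and sort lexicographically.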
If your problem is resolved, then please click the "Accept as Solution" button to help future readers.
That is difficult to determine, since you haven't shared your raw event or how you extracted the JSON part.
Thanks for your quick response. I tried using spath as well, but it seems that the field is not getting extracted in between, as the error suggests: Field 'orderTypesTotal' does not exist in the data. Do you think an extracted JSON would have an issue where a raw JSON would work with spath? My JSON payload is only created after adding an extraction via a regex on a raw event.
Hello, I'm facing a problem with my lookup command. Here is the context: I have 1 CSV:

pattern    type
*ABC*      1
*DEF*      2
*xxx*      3

And logs with a "url" field, e.g. "xxxxabcxxxxx.google.com". I need to check which of the patterns in my lookup are present in the url field of my log and, if any, how many match this field. My expected result is:

url                        type    count(type)
xxxxabcxxxxx.google.com    1 3     2

How can I do this?
- The "| lookup" command doesn't take the "*" symbol into account; only space or comma work with the "WILDCARD" config.
- The "| inputlookup" command works but can't display the field "type" because it only exists in my CSV, so I can't count either.
Thanks for your answers
@ITWhisperer  Now the code is working, I have modified it in a dashboard. Thanks for your thorough, genius help.
Did you get to a solution for this? Running version 4.4 and have the exact same issue. Thanks
Well, Splunk doesn't treat inf and -inf, mentioned in that same section, as numbers either. Anyway, I need to add additional logic to sanitize inputs that might have fields with the text "NaN" (which does occasionally happen when the source is a SQL query) either way - for most purposes it just isn't a number, and it tends to cause problems in further processing.
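For what it's worth, a sketch of the kind of sanitizing logic I mean (the field name value is a placeholder):

| eval value = if(in(upper(tostring(value)), "NAN", "INF", "-INF"), null(), value)

Anything that isn't a real number gets nulled out before further processing, so downstream stats and comparisons behave.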
Federated search is one possible approach. Another is to spin up a "central" search head and add all your distributed peers/clusters as search peers. The federated search approach is probably easier to maintain in the long run.
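If you go the central search head route, search peers can be added under Settings > Distributed search or from the CLI; a hedged example with placeholder host and credentials:

splunk add search-server https://peer01.example.com:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme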
You can use SEDCMD to remove all lines not beginning with two hashes. Something like SEDCMD-remove-unhashed = s/^([^#]|#[^#]).*$// (Haven't tested it though, might need some tweaking).
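Untested sketch of where that would live, with your own sourcetype name substituted in:

# props.conf on the indexer or heavy forwarder that parses the data
[my_sourcetype]
SEDCMD-remove-unhashed = s/^([^#]|#[^#]).*$//

SEDCMD is applied at parse time, so reload or restart the instance that does the parsing after the change, and note it only affects newly indexed data.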
Hi guys, I did this and it worked in replacing the comma, thank you. The N/A is in case it's empty. | eval value= if(value!="N/A", replace(tostring(value,"commas"),","," "),value) But the thing is, I can't order it correctly.
index="tput_summary" sourcetype="tput_summary_1d" | bin _time span="h" | table + _time LocationQualifiedName location date_hour date_mday date_minute date_month date_month date_second date_wday date_... See more...
index="tput_summary" sourcetype="tput_summary_1d" | bin _time span="h" | table + _time LocationQualifiedName location date_hour date_mday date_minute date_month date_month date_second date_wday date_year count | where like(LocationQualifiedName, "%/Aisle%Entry%") | strcat "raw" "," location group_name | where like(LocationQualifiedName,"%/Aisle%Entry%") OR like(LocationQualifiedName,"%/Aisle%Exit%") | strcat "raw" "," location group_name | timechart sum(count) as cnt by location
@ITWhisperer  I have used the below code to obtain token results in macros. Please provide your suggestions; are any changes needed?

<change>
  <eval token="time.earliest_epoch">if('earliest'="",0,if(isnum(strptime('earliest', "%s")),'earliest',relative_time(now(),'earliest')))</eval>
  <eval token="time.latest_epoch">if(isnum(strptime('latest', "%s")),'latest',relative_time(now(),'latest'))</eval>
  <eval token="macro_token">if($time.latest_epoch$ - $time.earliest_epoch$ &gt; 2592000, "throughput_macro_summary_1d",if($time.latest_epoch$ - $time.earliest_epoch$ &gt; 86400, "throughput_macro_summary_1h","throughput_macro_raw"))</eval>
  <eval token="form.span_token">if($time.latest_epoch$ - $time.earliest_epoch$ &gt; 2592000, "d", if($time.latest_epoch$ - $time.earliest_epoch$ &gt; 86400, "h", $form.span_token$))</eval>
</change>