All Posts

The stats command discards all fields not mentioned in the command, so in this case only the count, user, ip, and action fields are available. Fields cannot be re-added after they've been discarded by such a command. The solution is to include the desired field(s) in the stats command:

| stats count by event_time, user, ip, action

This may or may not make sense depending on your data and the desired output.
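If grouping by event_time splits the counts too finely, a hedged alternative sketch (field names borrowed from the thread) is to keep the timestamp as an aggregate rather than a group-by key:

index=*
| stats count earliest(_time) AS first_seen latest(_time) AS last_seen BY user ip action
| eval first_seen=strftime(first_seen, "%Y-%m-%d %H:%M:%S"), last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")
| sort -count

This keeps one row per user/ip/action combination while still showing when the activity was first and last seen.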
It's an FYI for everyone using a third party in the path. Chances are many are not paying attention to the subsecond field value.
That's right. Go to the third party to get the issue fixed, if possible.
So, the solution is perhaps to use a different third-party solution, or to raise a defect with said third party and get them to fix their data corruption? (Not a Splunk problem!?)
Are you trying to perform the stats by _time also? Just add your event_time into the stats command. Do you want to change the event_time format to only hour and minute, or just by hour?

index=*
| eval event_time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| stats count by event_time, user, ip, action
| iplocation ip
| sort -count
Hi @hcelep , after the stats command you have only the fields named in the command, in your case: count, user, ip and action. If you also want the _time, you have to add it to the stats command. You have two methods to do this: add it to the BY clause, or choose the first or the last value for each group.

In the first case, remember to group the timestamps using the bin command:

index=*
| eval event_time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| bin span=1h _time
| stats count by _time user ip action
| iplocation ip
| sort -count

In the second case, taking e.g. the first occurrence:

index=*
| eval event_time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| stats count earliest(_time) AS _time BY user ip action
| iplocation ip
| sort -count

Ciao. Giuseppe
Hey, I want to add a _time column after the stats command, but I couldn't work out the best command to use. For example:

index=*
| eval event_time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| stats count by user, ip, action
| iplocation ip
| sort -count

How can I add this field? Thanks
There is no issue if it's HF==>IDX, i.e. the HF sending directly to the IDX. It happens if it's HF==>(popular 3rd party S2S)==>IDX: the HF sending to the third party over S2S, and the third party sending to the Splunk indexer over S2S.
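If you want to check whether the subsecond part of the timestamps survived the third-party hop, a minimal sketch (index and sourcetype are placeholders) is to compare _time against its whole-second value:

index=your_index sourcetype=your_sourcetype
| eval subsec=_time - floor(_time)
| stats count BY subsec
| sort -count

If virtually every event lands on subsec=0, the subsecond values were most likely dropped in transit.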
Hi. Basically you shouldn't uninstall the previous version when you are upgrading. If you uninstall it first, you also lose the fishbucket db, which keeps track of ingested events. Basically that means re-ingesting all files etc. When you are updating the UF version you should follow the same version path as defined for full Enterprise. And remember to restart after every version update, as otherwise e.g. DB conversions haven't been done. As @gcusello said, there is a new feature which allows updating those UF binaries. Currently it's a beta or restricted to some customers. You can see more at voc.splunk.com. If you have some other system management tools, then you can use those to update the binaries, but again you must add the needed steps to those workflows. r. Ismo
yes, this: spath input=msg query{} is probably just some cheating to see some data, but it does not actually work. It is just some overcomplicated syntax to see the whole array, not individual items; if the array had more items, we wouldn't get the first item. If I need to get the first item in the array to work with further, this just does not work.
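For reference, a minimal sketch of two ways to pull out just the first array element (assuming the JSON lives in a field named msg with a top-level query array, as in the thread). Either address the element directly with spath's zero-based array index:

| spath input=msg path=query{0} output=first_query

or extract the whole array and index into the resulting multivalue field:

| spath input=msg path=query{} output=all_queries
| eval first_query=mvindex(all_queries, 0)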
How does it work?
Hi. If you can't find any reasonable explanation in the logs, you should create a support case with Splunk. r. Ismo
This was the solution! Please explain to me why:

<eval token="form.Tail">if($click.name2$=$form.Tail$, "*", $click.name2$)</eval>

works, but:

<eval token="form.Tail">if("$form.Tail$"==* OR "$form.Tail$"=="ALL", "$click.name2$",*)</eval>

didn't? It seems to me I'm just asking two opposite questions and switching the responses to match.
It looks that way - what configurations are you using?
Just a note on your use of dedup: you will only end up with a single event from ONE of the indexes (whichever is found first), which is one of the reasons why your search is not working as expected. You COULD use

| dedup index src_ip dest_ip

which would leave you one event from EACH index. However, as @yuanliu has said, fields + stats + rename is generally the optimal way to do the grouping. Also consider what exactly you want to see in the other fields: dedup would only give you ONE value, from the event that remains after the dedup, whereas stats values(*) as * would give you all values from all events for each src_ip grouping. Avoid join - it's not the Splunk way to do things, has significant limitations and will silently discard data, leading to variable results. stats is always the way to join data sets.
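A minimal sketch of the two behaviours described above (field names taken from the thread, base search omitted):

| dedup index src_ip dest_ip ``` keeps one surviving event per index/src_ip/dest_ip; other fields keep only that event's values ```

| stats values(*) AS * BY src_ip ``` collects every value of every field across all matching events for each src_ip ```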
Thanks for your support and help. I worked more with my search, and I found a way to craft it with some additional limiting material to extract what I want. The next step was to join that material with the results from this subsearch, and even that was successful, so right now it looks like I have been able to solve the problems I had. In any case, the core of my original problem was related to using localize and map in a saved search, and that has now been resolved. In addition, I managed to improve my search, so I think we can both be happy about it.
When using OR, you cannot dedup src_ip dest_ip immediately after the search. That should be performed after stats, as you would with join. Using the same structure @gcusello proposed, you can do

(index=*corelight* sourcetype=*corelight* server_name=*microsoft.com*) OR (index="*firewall*" sourcetype=*traffic* src_ip=10.1.1.100)
| fields src_zone, src_ip, dest_zone, dest_ip, server_name, transport, dest_port, app, rule, action, session_end_reason, packets_out, packets_in, src_translated_ip, dvc_name
| stats values(*) AS * BY dest_ip
| dedup src_ip, dest_ip ``` most likely this is unnecessary after stats ```
| rename src_zone AS From, src_ip AS Source, dest_zone AS To, dest_ip AS Destination, server_name AS SNI, transport AS Protocol, dest_port AS Port, app AS "Application", rule AS "Rule", action AS "Action", session_end_reason AS "End Reason", packets_out AS "Packets Out", packets_in AS "Packets In", src_translated_ip AS "Egress IP", dvc_name AS "DC"
Just use this in the drilldown:

<eval token="form.Tail">if($click.name2$=$form.Tail$, "*", $click.name2$)</eval>

So, if the clicked value is the same as the current form value, it sets the form value to * (which in my example is the value for the All dropdown option), otherwise it sets the form value to the clicked legend. Full working example below:

<form version="1.1" theme="light">
  <label>Tail</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="Tail" searchWhenChanged="true">
      <label>Tail</label>
      <choice value="*">All</choice>
      <choice value="1">1</choice>
      <choice value="2">2</choice>
      <choice value="3">3</choice>
      <default>*</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <chart>
        <search>
          <query>| makeresults count=60 | eval Tail=random() % 3 | streamstats c | eval r=random() % 100 | eval source=random() % 10 | search Tail=$Tail$ | chart count over source by Tail</query>
          <earliest>-30m@m</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="charting.chart">column</option>
        <option name="charting.chart.stackMode">stacked</option>
        <option name="charting.drilldown">all</option>
        <option name="refresh.display">progressbar</option>
        <drilldown>
          <eval token="form.Tail">if($click.name2$=$form.Tail$, "*", $click.name2$)</eval>
        </drilldown>
      </chart>
    </panel>
  </row>
</form>
Hi @dmngaya , first of all, as @yuanliu said, please also share samples in text format (using the Insert/Edit Code Sample button). Then, don't use the search command after the main search because your search will be slower: if possible, put all the search terms in the main search. Also, in your search I don't see the failed-login condition (e.g. EventCode=4625 in Windows), and you need it in the main search. Then, I suppose that you need to check the condition for each host in your infrastructure and each account. Anyway, you have to use the stats command to aggregate results and the where command to filter them, something like this (for the failed-login condition I use the one from Windows; replace it with your condition):

index=nprod_database sourcetype=tigergraph:app:auditlog:8542 host=VCAUSC11EUAT* userAgent OR "actionName":"login" "timestamp":"2025-01-07T*" EventCode=4625
| stats count BY user host
| where count>3

Adapt it to your real case. Ciao. Giuseppe
Hi @sdcig , I simplified the search; in the stats command, replace the values(*) AS * with the five fields you want:

(index=*corelight* sourcetype=*corelight* server_name=*microsoft.com*) OR (index="*firewall*" sourcetype=*traffic* src_ip=10.1.1.100)
| dedup src_ip, dest_ip
| fields src_zone, src_ip, dest_zone, dest_ip, server_name, transport, dest_port, app, rule, action, session_end_reason, packets_out, packets_in, src_translated_ip, dvc_name
| stats values(src_zone) AS From values(src_ip) AS Source values(dest_zone) AS To values(server_name) AS SNI values(transport) AS Protocol values(dest_port) AS Port values(app) AS Application values(rule) AS Rule values(action) AS Action values(session_end_reason) AS "End Reason" values(packets_out) AS "Packets Out" values(packets_in) AS "Packets In" values(src_translated_ip) AS "Egress IP" values(dvc_name) AS DC values(src_zone) AS src_zone BY dest_ip
| rename dest_ip AS Destination

If there are fields with different names between the two indexes, use eval coalesce to give them the same field name. Ciao. Giuseppe
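As a minimal sketch of the coalesce suggestion (source_address is a hypothetical name the other index might use for the same field), add something like this before the stats command:

| eval src_ip=coalesce(src_ip, source_address)

coalesce returns the first non-null value among its arguments, so whichever index populated the field, the stats BY clause sees a single consistent field name.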