All Posts

Hi, I need to turn the search below into a Web data model search: index=es_web action=blocked host=* sourcetype=* | stats count by category | sort 5 -count Thanks
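A possible tstats equivalent, assuming the CIM Web data model is accelerated and that the blocked traffic maps to the data model's action and category fields (constraints below are an assumption, adjust to your environment):

| tstats count from datamodel=Web where Web.action="blocked" by Web.category
| rename Web.category as category
| sort 5 -count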
Hi @scout29 , please try something like this: | tstats count where index=abc BY host | append [ | inputlookup hosts.csv | eval count=0 | fields host count] | stats sum(count) AS total BY host | where total=0 Ciao. Giuseppe
For adding the metadata, see https://community.splunk.com/t5/Splunk-Search/How-can-I-add-metadata-to-events-at-the-forwarder/m-p/666062/highlight/false#M228500 Thanks @rphillips_splk
Hi everyone, Do you know a way to change the value of a metadata field for a universal forwarder? I added my own metadata with _meta = id::Mik. Added in fields, I can see it in my events. Now I'd like to change the value Mik to another value. Can I use the Python API? The Splunk CLI? The REST API? There seem to be a lot of possible solutions, but for the moment I can't find one. Thanks
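For reference, a minimal sketch of how such a static field is usually changed, assuming the value was set with _meta in inputs.conf on the forwarder (the monitor stanza below is only illustrative):

# inputs.conf on the universal forwarder
[monitor:///var/log/myapp]
_meta = id::NewValue

# restart the forwarder so new events pick up the change
$SPLUNK_HOME/bin/splunk restart

Note that already-indexed events keep the old value; the change only applies to data ingested after the restart.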
Hi @ivan123357 , I'd try something like this: index=custom (evt_id=1 OR evt_id=2) earliest=-7m latest=-5m | stats last(evt_id) AS evt_id earliest(_time) AS earliest latest(_time) AS latest BY user_id | where evt_id=1 OR (latest-earliest>300) Ciao. Giuseppe
For anyone in the future, this is the final query I went with. I was trying to group any event in a certain index and sourcetype.

index=test sourcetype=test2 source=*
| rex field=test_city "(?<city>[A-Za-z]+)_$"
| eval has_true_port = case( port_123="True" OR port_139="True" OR port_21="True" OR port_22="True" OR port_25="True" OR port_3389="True" OR port_443="True" OR port_445="True" OR port_53="True" OR port_554="True" OR port_80="True", "Yes", true(), "No" )
| where has_true_port = "Yes"
| stats values(port_123) as port_123, values(port_139) as port_139, values(port_21) as port_21, values(port_22) as port_22, values(port_25) as port_25, values(port_3389) as port_3389, values(port_443) as port_443, values(port_445) as port_445, values(port_53) as port_53, values(port_554) as port_554, values(port_80) as port_80, values(city) as City by destination, test_src_ip
| eval open_ports = if(port_123="True", "123,", "") . if(port_139="True", "139,", "") . if(port_21="True", "21,", "") . if(port_22="True", "22,", "") . if(port_25="True", "25,", "") . if(port_3389="True", "3389,", "") . if(port_443="True", "443,", "") . if(port_445="True", "445,", "") . if(port_53="True", "53,", "") . if(port_554="True", "554,", "") . if(port_80="True", "80,", "")
| eval open_ports = rtrim(open_ports, ",")
| table destination, test_src_ip City open_ports

The result looks a bit like this. Basically, this combines each open port into one row while also grouping by destination IP and source IP.
Hi, this solution has been cancelled 
Hello to everyone! I have a strange issue with some events that come from our virtual environment. As you can see in the screenshot, some events become multi-lined, but I don't want that behaviour. I suspected the SHOULD_LINEMERGE setting, which can merge events with similar timestamps, but I have now turned it off. Below the Splunk search window I attached a result from regex101.com that shows how my LINE_BREAKER option works. I also tried the default ([\r\n]+) regex, but it did not work either. What did I miss in my approach? My props.conf for the desired sourcetype looks like:
[vsi_esxi_syslog]
LINE_BREAKER = ([\r\n]+)|(\n+)
SHOULD_LINEMERGE = false
TIME_PREFIX = ^<\d+>
Also, I did look at the metrics.log and it shows the connections from the server sending the logs, but nothing still in the index. Below is an example of the connection (I have x'd out the IP) 10-25-2023 16:22:34.165 +0000 INFO Metrics - group=tcpin_connections, x.x.x.x:31311:6514, connectionType=rawSSL, sourcePort=31311, sourceHost=x.x.x.x, sourceIp=x.x.x.x, destPort=6514, kb=0.000, _tcp_Bps=0.000, _tcp_KBps=0.000, _tcp_avg_thruput=0.000, _tcp_Kprocessed=0.000, _tcp_eps=0.000, _process_time_ms=0, evt_misc_kBps=0.000, evt_raw_kBps=0.000, evt_fields_kBps=0.000, evt_fn_kBps=0.000, evt_fv_kBps=0.000, evt_fn_str_kBps=0.000, evt_fn_meta_dyn_kBps=0.000, evt_fn_meta_predef_kBps=0.000, evt_fn_meta_str_kBps=0.000, evt_fv_num_kBps=0.000, evt_fv_str_kBps=0.000, evt_fv_predef_kBps=0.000, evt_fv_offlen_kBps=0.000, evt_fv_fp_kBps=0.000
What do you get from just doing the index search? index="dynatrace" [| makeresults | eval earliest=relative_time(now(),"$tr_14AGuxUA.earliest$"), latest=relative_time(now(),"$tr_14AGuxUA.latest$") | table earliest latest] Also, what do the events look like, particularly the userActions object? You may need to further "expand" the userActions{} object.
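If the userActions array does need expanding, a rough sketch with spath and mvexpand might look like this (the userActions{} name comes from the post; everything else is an assumption about the event structure):

index="dynatrace"
| spath output=actions path=userActions{}
| mvexpand actions
| spath input=actions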
Looking to create an alert if a host on a lookup stops sending data to Splunk index=abc. I have created a lookup called hosts.csv with all the hosts expected to be logging for a data source. Now I need to create a search/alert that notifies me if a host on this lookup stops sending data to index=abc. I was trying something like the search below, but am not having much luck: | tstats count where index=abc host NOT [| inputlookup hosts.csv] by host The lookup called hosts.csv is formatted with the column name being host, for example:
host
hostname101
hostname102
hostname103
hostname104
The event code description trimming is not turned on by default. You need to specifically turn it on in a local props.conf.
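For illustration, one commonly shared way of doing this is a SEDCMD in a local props.conf that strips the boilerplate description text; treat the stanza name and regex below as assumptions to verify against your own TA's props.conf rather than a drop-in config:

# props.conf (local), illustrative only
[WinEventLog:Security]
SEDCMD-clean_info_text = s/(?m)(^This event is generated[\S\s\r\n]+)//g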
Assuming you are correlating by user id (as you don't appear to have the action id in your index), you could do something like this index=custom evt_id=1 OR evt_id=2 earliest=-7m latest=now | stats latest(evt_id) as last_event latest(_time) as last_time by user_id | where last_event=1 AND last_time < relative_time(now(), "-5m")
Tabling works, but that's not enough if you need to carry along hidden values for Drilldowns. Tokens "kind of" work with the <fields> tag, but they don't seem to update when the token changes. Classic Splunk W.
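For anyone hitting the same thing, a rough Simple XML sketch of the pattern being discussed: keep the field in the search results, hide it from display with the <fields> tag, and read it back in a drilldown via $row.<field>$ (the field and token names here are made up for illustration):

<table>
  <search>
    <query>index=_internal | stats count latest(_time) as hidden_time by sourcetype</query>
    <earliest>-15m</earliest>
    <latest>now</latest>
  </search>
  <!-- only these columns are displayed; hidden_time stays in the results -->
  <fields>["sourcetype","count"]</fields>
  <drilldown>
    <!-- the hidden value is still reachable per row -->
    <set token="selected_hidden_time">$row.hidden_time$</set>
  </drilldown>
</table>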
Hi @PickleRick , Thank you so much, I got the query I was expecting. But I have some changes here, can you help me with that? For the Standby column I don't have any individual searches like url and cleared_log; I just need the standby count as Total_Messages - Error_count to be displayed on the table. The rest will all be the same. |tstats count as Total_Messages where index=app-logs TERM(Request) TERM(received) TERM(from) TERM(all) TERM(applications)
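A minimal sketch of that calculation, assuming Total_Messages and Error_count already exist as fields in the final result row (names taken from the post):

... | eval Standby = Total_Messages - Error_count
| table Total_Messages Error_count Standby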
@MikeyD100 - I would suggest creating a customer support case with Splunk.
This might be irrelevant, but you appear to have 9 decimal places in your timestamp, not 6 (%6N), your timezone variable looks like it should be "%:z" rather than just "%z", and your sample JSON is not valid (although this could just be a copy/paste/anonymisation artefact). Also, are you sure 1000 is enough of a lookahead?
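For illustration, a props.conf sketch reflecting those suggestions (the stanza name, TIME_PREFIX, and lookahead value are assumptions, not the asker's actual config):

[your_sourcetype]
TIME_PREFIX = "timestamp":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%9N%:z
MAX_TIMESTAMP_LOOKAHEAD = 2000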
Wow finally got this to work.  I'm not sure why but I had to refresh and it started working.  All the other numbers populated.  Thank you so much for your help!!!!  Here is a screen shot that could help someone else with the final code that was successful.      
So we are trying to send syslog from our BeyondTrust PRA appliance to Splunk. We have validated via the SSL/TLS test that the connection is good. I have the cert on both sides, so this appears to be okay. We do not see the events in the index though. Configured inputs.conf in the /local folder as follows:
[tcp-ssl://6514]
disabled = false
[SSL]
requireClientCert = false
serverCert = /opt/splunk/etc/auth/custom/combined.cer
sslVersions = tls1.2
cipherSuite = AES256-SHA
We have the input set up in the web interface and have the correct index and source defined. No events are coming in though. I've seen several articles from multiple years back on configuring this. The TLS handshake works, what are we missing? Thanks in advance! FYI: Tried this over UDP using a non-TLS input and the data comes in fine, but when we try with SSL it never shows up in the index.
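For comparison, a sketch of a more complete tcp-ssl input stanza; the index, sourcetype, and sslPassword lines are assumptions to adapt, not a known-good config for this appliance:

# inputs.conf on the receiving Splunk instance
[tcp-ssl://6514]
disabled = false
index = beyondtrust
sourcetype = beyondtrust:syslog
connection_host = dns

[SSL]
serverCert = /opt/splunk/etc/auth/custom/combined.cer
sslPassword = <private key password, if the key is encrypted>
requireClientCert = false
sslVersions = tls1.2
cipherSuite = AES256-SHA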
The procedure varies depending on the environment. On a standalone server or on independent search heads, indexers, and heavy forwarders, just remove the app directory from $SPLUNK_HOME/etc/apps and restart Splunk. In a search head cluster, remove the app from $SPLUNK_HOME/etc/shcluster on the SHC deployer and push the shcluster bundle. In an indexer cluster, remove the app from $SPLUNK_HOME/etc/manager-apps (or master-apps) and push the cluster bundle. For universal forwarders, remove the app from the appropriate server class(es). If no clients use the app, it can be removed from $SPLUNK_HOME/etc/deployment-apps.
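As a rough illustration of those push steps (paths, the app name, and the target URI are placeholders, not taken from the post):

# Search head cluster: on the deployer
rm -rf $SPLUNK_HOME/etc/shcluster/apps/<app_name>
$SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://<any_shc_member>:8089

# Indexer cluster: on the manager node
rm -rf $SPLUNK_HOME/etc/manager-apps/<app_name>
$SPLUNK_HOME/bin/splunk apply cluster-bundle

# Deployment server: after removing the app from deployment-apps / server classes
$SPLUNK_HOME/bin/splunk reload deploy-server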