All Posts


Hi, this solution has been cancelled 
Hello to everyone! I have a strange issue with some events that come from our virtual environment. As you can see in the screenshot, some events become multi-lined, but I don't want that behavior. I suspected the SHOULD_LINEMERGE setting, which can merge events with similar timestamps, but I have now turned it off. Below the Splunk search window I attached a result from regex101.com that shows my LINE_BREAKER regex working. I also tried the default ([\r\n]+) regex, but it did not work either. What did I miss in my approach? My props.conf for the desired sourcetype looks like:

[vsi_esxi_syslog]
LINE_BREAKER = ([\r\n]+)|(\n+)
SHOULD_LINEMERGE = false
TIME_PREFIX = ^<\d+>
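One detail that often trips this up: the first capture group in LINE_BREAKER marks the text that is *discarded* between events, and the second alternative `(\n+)` is redundant because `[\r\n]+` already matches any run of newlines. A minimal sketch of the stanza, assuming the events really are newline-delimited syslog:

```
[vsi_esxi_syslog]
# First capture group = text consumed at the event boundary.
# [\r\n]+ already covers \n+, so one alternative is enough.
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
TIME_PREFIX = ^<\d+>
```

Note these settings only take effect on the first full Splunk instance that parses the data (indexer or heavy forwarder, not a universal forwarder), and only for events indexed after a restart.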
Also, I did look at the metrics.log and it shows the connections from the server sending the logs, but still nothing in the index. Below is an example of the connection (I have x'd out the IP): 10-25-2023 16:22:34.165 +0000 INFO Metrics - group=tcpin_connections, x.x.x.x:31311:6514, connectionType=rawSSL, sourcePort=31311, sourceHost=x.x.x.x, sourceIp=x.x.x.x, destPort=6514, kb=0.000, _tcp_Bps=0.000, _tcp_KBps=0.000, _tcp_avg_thruput=0.000, _tcp_Kprocessed=0.000, _tcp_eps=0.000, _process_time_ms=0, evt_misc_kBps=0.000, evt_raw_kBps=0.000, evt_fields_kBps=0.000, evt_fn_kBps=0.000, evt_fv_kBps=0.000, evt_fn_str_kBps=0.000, evt_fn_meta_dyn_kBps=0.000, evt_fn_meta_predef_kBps=0.000, evt_fn_meta_str_kBps=0.000, evt_fv_num_kBps=0.000, evt_fv_str_kBps=0.000, evt_fv_predef_kBps=0.000, evt_fv_offlen_kBps=0.000, evt_fv_fp_kBps=0.000
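Worth noting: kb=0.000 on a tcpin_connections line means the TCP session was established but no payload bytes were received, which often points to a sender that connects and then fails before transmitting. One way to watch throughput per sender over time, sketched against the standard internal metrics:

```
index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(kb) as kb latest(_tcp_eps) as eps by sourceIp, destPort
```

If kb stays at 0.000 across intervals, the problem is on the sending side of the session rather than in Splunk's parsing.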
What do you get from just doing the index search? index="dynatrace" [| makeresults | eval earliest=relative_time(now(),"$tr_14AGuxUA.earliest$"), latest=relative_time(now(),"$tr_14AGuxUA.latest$") | table earliest latest] Also, what do the events look like, particularly the userActions object? You may need to further "expand" the userActions{} object.
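If userActions turns out to be a JSON array, a common way to expand it is roughly the following (a sketch; the index and field names are taken from the post, and the output name `action` is a placeholder):

```
index="dynatrace"
| spath path=userActions{} output=action
| mvexpand action
| spath input=action
```

This turns each array element into its own row and then extracts the element's inner fields.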
Looking to create an alert if a host in a lookup stops sending data to Splunk index=abc. I have created a lookup called hosts.csv with all the hosts expected to be logging for a data source. Now I need to create a search/alert that notifies me if a host in this lookup stops sending data to index=abc. I was trying something like the search below, but am not having much luck:

| tstats count where index=abc host NOT [| inputlookup hosts.csv] by host

The lookup hosts.csv has a single column named host, for example:

host
hostname101
hostname102
hostname103
hostname104
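A common pattern for this "expected host went quiet" alert is to start from the lookup and subtract the hosts that did report, rather than filtering inside tstats. A sketch, assuming the lookup column is named host:

```
| inputlookup hosts.csv
| search NOT [| tstats count where index=abc by host | fields host]
```

Any rows returned are hosts in the lookup with no data in the chosen time range, so the alert condition would be "number of results > 0".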
The event code description trimming is not turned on by default. You need to specifically turn it on in a local props.conf.
Assuming you are correlating by user id (as you don't appear to have the action id in your index), you could do something like this index=custom evt_id=1 OR evt_id=2 earliest=-7m latest=now | stats latest(evt_id) as last_event latest(_time) as last_time by user_id | where last_event=1 AND last_time < relative_time(now(), "-5m")
Tabling works, but that's not enough if you need to carry along hidden values for Drilldowns. Tokens "kind of" work with the <fields> tag, but they don't seem to update when the token changes. Classic Splunk W.
Hi @PickleRick, Thank you so much, I got the query I was expecting. But I have some changes here, can you help me with that? For the Standby column I don't have any individual searches like url and cleared_log; I just need the standby count, calculated as Total_Messages - Error_count, displayed in the table. The rest all stays the same.

| tstats count as Total_Messages where index=app-logs TERM(Request) TERM(received) TERM(from) TERM(all) TERM(applications)
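One way to sketch that, assuming Error_count comes from a second tstats over the same index (TERM(ERROR) below is a placeholder for whatever terms identify the error events in the thread):

```
| tstats count as Total_Messages where index=app-logs TERM(Request) TERM(received) TERM(from) TERM(all) TERM(applications)
| appendcols
    [| tstats count as Error_count where index=app-logs TERM(ERROR)]
| eval Standby = Total_Messages - Error_count
| table Total_Messages Error_count Standby
```

appendcols lines the two single-row results up side by side so eval can do the subtraction.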
@MikeyD100 - I would suggest to create a customer support case with Splunk.  
This might be irrelevant, but you appear to have 9 decimal places in your timestamp, not 6 (%6N), your timezone variable looks like it should be "%:z" not just "%z", and your sample json is not valid (although this could just be a copy/paste/anonymisation artefact). Also, are you sure 1000 is enough of a lookahead?
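Putting those observations together, the timestamp attributes for an event ending in "@timestamp":"2023-07-11T11:26:19.133326179+00:00" might look like this (a sketch: %9N for the nine fractional digits, %:z for the colon-separated offset):

```
TIME_PREFIX = "@timestamp":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%9N%:z
MAX_TIMESTAMP_LOOKAHEAD = 40
```

MAX_TIMESTAMP_LOOKAHEAD counts from the end of the TIME_PREFIX match, so once the prefix anchors right before the timestamp, a small value is plenty.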
Wow, finally got this to work. I'm not sure why, but I had to refresh and it started working. All the other numbers populated. Thank you so much for your help!!!! Here is a screenshot with the final code that was successful, in case it helps someone else.
So we are trying to send syslog from our BeyondTrust PRA appliance to Splunk. We have validated via the SSL/TLS test that the connection is good. I have the cert at both sides, so this appears to be okay. We do not see the events in the index, though. Configured inputs.conf in the /local folder as follows:

[tcp-ssl://6514]
disabled = false

[SSL]
requireClientCert = false
serverCert = /opt/splunk/etc/auth/custom/combined.cer
sslVersions = tls1.2
cipherSuite = AES256-SHA

We have the input set up in the web interface and have the correct index and source defined. No events coming in, though. I've seen several articles from multiple years back on configuring this. The TLS handshake works; what are we missing? Thanks in advance! FYI: Tried this over UDP using a non-TLS input and the data comes in fine, but when we try with SSL it never shows up in the index.
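For comparison, a minimal working shape for a TLS syslog input (a sketch: the index and sourcetype are placeholders, and combined.cer is assumed to contain the server certificate, its private key, and the CA chain concatenated):

```
[tcp-ssl://6514]
disabled = false
index = beyondtrust
sourcetype = beyondtrust:syslog

[SSL]
serverCert = /opt/splunk/etc/auth/custom/combined.cer
sslPassword = <set only if the private key is encrypted>
requireClientCert = false
sslVersions = tls1.2
```

Handshake and certificate errors are logged in $SPLUNK_HOME/var/log/splunk/splunkd.log (search for "SSL"); a handshake that completes but leaves kb=0.000 in metrics.log often points to the sender closing before it transmits, or a cipher the restrictive cipherSuite setting rejects.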
The procedure varies depending on the environment. On a standalone server or on independent search heads, indexers, and heavy forwarders, just remove the app directory from $SPLUNK_HOME/etc/apps and restart Splunk. In a search head cluster, remove the app from $SPLUNK_HOME/etc/shcluster on the SHC deployer and apply the shcluster bundle. In an indexer cluster, remove the app from $SPLUNK_HOME/etc/manager-apps (or master-apps on older versions) and apply the cluster bundle. For universal forwarders, remove the app from the appropriate server class(es). If no clients use the app, it can be removed from $SPLUNK_HOME/etc/deployment-apps.
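The same steps as commands, sketched for a *nix install (the app name and search head member are placeholders):

```
# Standalone instance / independent SH, indexer, or heavy forwarder
rm -rf $SPLUNK_HOME/etc/apps/<app_name>
$SPLUNK_HOME/bin/splunk restart

# Search head cluster: run on the deployer
rm -rf $SPLUNK_HOME/etc/shcluster/apps/<app_name>
$SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://<sh_member>:8089

# Indexer cluster: run on the manager node
rm -rf $SPLUNK_HOME/etc/manager-apps/<app_name>
$SPLUNK_HOME/bin/splunk apply cluster-bundle
```

Since you keep a git repo for configs, remember to delete the app there too so a future deploy doesn't reinstall it.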
Hi Team, We need your help in reviewing Data Collectors created in AppDynamics for a .NET WCF service. We need your guidance as the newly created Data Collectors are not working. Thank you, Deepak Paste
Hi, I am trying to upload an Elasticsearch log file to Splunk. This is an example of one entry in a long log:

{"_index":"index-00","_type":"_doc","_id":"TyC0RIkBQC0jFzdXd-XG","_score":1,"_source":"{"something_long":"long json"}\n","stream":"stderr","docker":{"container_id":"d48887cdb80442f483a876b9f2cd351ae02a8712ec20960a9dc66559b8ccce87"},"kubernetes":{"container_name":"container","namespace_name":"namespace","pod_name":"service-576c4bcccf-75gzq","container_image":"art.com:6500/3rdparties/something/something-agent:1.6.0","container_image_id":"docker-pullable://art.com:6500/3rdparties/something/something-agent@sha256:02b855e32321c55ffb1b8fefc68b3beb6","pod_id":"3c90db56-3013a73e5","host":"worker-3","labels":{"app":"image-service","pod-template-hash":"576c4bcccf","role":"image-ervice"}},"level":"info","ts":1689074778.913063,"caller":"peermgr/peer_mgr.go:157","msg":"Not enough connected peers","connected":0,"required":1,"@timestamp":"2023-07-11T11:26:19.133326179+00:00"}}

As you can see, the timestamp is at the end. So I have set up my props.conf as follows:

[elastic_logs]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
description = make sure timestamp is taken
pulldown_type = 1
TIME_PREFIX = "@timestamp":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%z
MAX_TIMESTAMP_LOOKAHEAD = 1000

I can see the timestamp in the Splunk entries, but that is all I can see now; all the other fields are not displayed. What am I doing wrong?
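One thing to check before tuning further: as pasted, the _source value embeds unescaped quotes ("_source":"{"something_long"...), which makes the whole line invalid JSON, and INDEXED_EXTRACTIONS = json cannot extract fields from malformed events. If the real events are valid and only the paste is mangled, a search-time extraction is often easier to troubleshoot than an indexed one; a sketch, assuming the nine-digit fractional seconds seen in the sample:

```
[elastic_logs]
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
TIME_PREFIX = "@timestamp":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%9N%z
MAX_TIMESTAMP_LOOKAHEAD = 40
KV_MODE = json
```

INDEXED_EXTRACTIONS runs on the instance that parses the data, while KV_MODE is read on the search head; enabling both on the same sourcetype produces duplicated field values, so pick one.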
Hello, We have a few apps that are no longer needed in our on-premise environment. We maintain a git repo for configs. Can anyone please help me with the steps to uninstall/remove the apps? Thanks
Completed this. I added | table status, range, which got rid of any colour on the dashboard, and the colour of the range took over.
index=serverX sourcetype=CAServer
| dedup ID
| stats count
| eval status=if(count=0,"XXX is ok","XXX is not ok")
| rangemap field=count low=0-0 severe=1-100

This works and returns a count of 34 in red; however, I want to return the status text in red, not just the number. I can return the status with | stats status, but it is in black and white. Any help is appreciated.
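To carry the colour onto the text, one option is to keep rangemap running on the count and then table both status and range, so the visualization colours by the range field. A sketch of the same search:

```
index=serverX sourcetype=CAServer
| dedup ID
| stats count
| eval status=if(count=0, "XXX is ok", "XXX is not ok")
| rangemap field=count low=0-0 severe=1-100
| table status, range
```

rangemap adds a range field ("low"/"severe") that dashboard colour rules can key on, while status stays as the displayed text.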
How can I get the SVC Usage for Saved Searches and Ad-hoc searches?  These logs don't have it. index="_internal" data.searchType="scheduled"   index="_audit" sourcetype="audittrail" action="search" info="completed"