All Posts

Right you are, it was a misconfigured firewall on the hosts.
Hi, I hit this issue as well: direct access could connect, but Splunk reported an invalid connection error even after adding a pg_hba.conf entry: FATAL: no pg_hba.conf entry for host "10.24.154.215", user "AMDSPLUNK", database "IOU", SSL on. In my case the root cause was driver compatibility, so you can try the drivers below: 1. Splunk DBX Add-on for Postgres JDBC | Splunkbase -> add-on; this one worked for me. 2. About the JDBC Driver for Postgres - Splunk Documentation -> the .jar file; this did not work here, but it does work in another environment. You can try them, hopefully this helps.
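For reference, a pg_hba.conf entry matching that error message would look something like the line below (host, user, and database are taken from the error; the md5 auth method is an assumption, adjust it to your setup and reload Postgres after editing):

host IOU AMDSPLUNK 10.24.154.215/32 md5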
I have a question. We have a standalone Splunk instance in AWS running version 7.2.3 and are looking to upgrade it to 9.3.0. To get to that version I will have to do about four intermediate upgrades. Also, since the current instance runs on Red Hat 6.4, I would have to upgrade the OS as well to run the current Splunk version. What I am curious about is this: AWS has a Splunk 9.3.0 AMI with BYOL. Would it be possible to migrate the data over to a new instance along with the configuration settings? This is used as a customer lab, so we only have about a dozen universal forwarders pointing to this server. There are no alerts running on it and only three dashboards. The Splunk home is stored on a separate volume from the OS, so I could detach it from the old instance and attach it to the new one, or snapshot it and use the snapshot on the new one. Any suggestions for this? Thanks.
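If you go the snapshot route, a minimal sketch with the AWS CLI might look like the commands below (all IDs, the availability zone, and the device name are placeholders):

aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "splunk home"
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-east-1a
aws ec2 attach-volume --volume-id vol-0fedcba9876543210 --instance-id i-0123456789abcdef0 --device /dev/sdf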
Again - firstly, check with tcpdump that your events actually reach your destination host. If you don't see the data on the wire, no magic within the OS will make it appear out of thin air.
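For example, something like this on the destination host should show the packets arriving (assuming syslog over UDP 514; adjust the port and protocol to match your input):

tcpdump -i any -nn udp port 514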
We have set up RHEL 8.10 to be our new Splunk instance. As before on CentOS Stream, we get syslog data from everything except the VMware host syslog data... We still have the Windows Splunk server around, and if we change the Syslog.global.logHost key in the Advanced System Settings on each host back to the Windows Splunk server, the syslog data from the hosts shows up. It appears that if splunkd runs under the splunk user, a port forwarding solution would be required to forward syslog to a higher port. However, splunkd is running as root, not as the splunk user. Years ago, we ran Splunk on CentOS 7 and never had this issue. Is the port forwarding solution the answer here?
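For what it's worth, if you ever do move splunkd to the non-root splunk user, a common approach is a firewalld port forward from 514 to an unprivileged port (1514 here is just an example; point your Splunk UDP input at that port):

firewall-cmd --permanent --add-forward-port=port=514:proto=udp:toport=1514
firewall-cmd --reload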
I think the db.system would have to match one of the systems in the supported databases list. On the backend, there is likely an "allow list" that checks whether the database system is supported for Query Performance before it shows up in the UI. What is the value of your db.system when you use this ClickHouse driver?
We saw this when we updated a saved search to be triggered "for each result" (i.e. alert.digest_mode = 0) as opposed to the default "once" (alert.digest_mode = 1). This caused the result links to start using the loadjob command.   
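For reference, that setting lives in savedsearches.conf (the stanza name below is hypothetical):

[my_alert]
alert.digest_mode = 0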
Hello Splunk ES experts, I want to build a query that produces MTTD by analyzing the time difference between when a raw log event is ingested (and meets the condition of a correlation search) and when a notable event is generated by that correlation search. I have tried the search below, but it does not give the results I expect, because it does not calculate the time difference for notables that are in New status; it works fine for any other status. Can someone please help me with this? Maybe it is simple to achieve and I am overcomplicating it.

index=notable
| eval orig_epoch=if(NOT isnum(orig_time), strptime(orig_time, "%m/%d/%Y %H:%M:%S"), 'orig_time')
| eval event_epoch_standardized=orig_epoch, diff_seconds='_time'-'event_epoch_standardized'
| fields + _time, search_name, diff_seconds
| stats count as notable_count, min(diff_seconds) as min_diff_seconds, max(diff_seconds) as max_diff_seconds, avg(diff_seconds) as avg_diff_seconds by search_name
| eval avg_diff=tostring(avg_diff_seconds, "duration")
| addcoltotals labelfield=search_name
Again - it's not _where_ it's processed, it's when and how it's processed. Things are processed at search time on indexers too. And no, you cannot use indexed extractions on data where the whole events aren't fully well-formed structured data.
many thanks to Ryan McGinn
In Splunk_TA_microsoft_sysmon\default\app.conf or Splunk_TA_microsoft_sysmon\local\app.conf, add the following and then push the SHC bundle:

[shclustering]
deployer_lookups_push_mode = always_overwrite

Putting it in app.conf seems the best way for the sysmon TA.
We had used the -preserve-lookups true option when we did the SHC bundle push, and the add-on's 3.x version of the lookup had a different field name (record_type) vs the version in 4.x, which is record_type_id.
We updated the Sysmon add-on from 3.x to 4.0.1 (latest) on a search head cluster. Afterwards, we're getting errors that the node we're on and the indexers can't load a lookup (Could not load lookup=LOOKUP-record_type).
Disregard, issue resolved
@PickleRick I am looking for options on the indexer to convert the data to a structured format, not on the search head.
@kamlesh_vaghela I want to get the full event into Splunk. The SEDCMD below removes the first few lines, and the remaining event is then viewed as JSON. I want to keep the full event as it is. Is there a way to apply props/transforms so that Splunk handles events containing both structured (JSON) and unstructured data?
Windows logs are... tricky, whichever way you want to process them. If you use a third-party solution that pushes to Splunk, you have tons of problems with parsing. If you use Splunk to forward events to a third-party receiver, you get issues like this. Unfortunately, syslog receivers don't play nice with multiline events. What you could try: change the format of Windows events to XML (which is advised anyway), make a copy of your Windows events with CLONE_SOURCETYPE, remove line ends from the event with a transform applied to that new sourcetype, and route that sourcetype to your syslog output. Might work, might not, just an idea off the top of my head; a rough sketch of the config is below.
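A rough sketch of that idea, assuming XML Windows events; the stanza, sourcetype, and output group names are all hypothetical:

transforms.conf:
[clone_winevents_for_syslog]
REGEX = .
CLONE_SOURCETYPE = wineventlog_flat

[route_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = my_syslog_group

props.conf:
[XmlWinEventLog]
TRANSFORMS-clone = clone_winevents_for_syslog

[wineventlog_flat]
SEDCMD-flatten = s/[\r\n]+/ /g
TRANSFORMS-route = route_to_syslog

outputs.conf:
[syslog:my_syslog_group]
server = syslog.example.com:514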
You might be able to narrow down which users were on the system at the time (and any searches that might have done it, even scheduled ones) by running:

index=_audit login attempt
| table _time user

You might have a lot of "internal_observability" user hits that you can exclude, but then it should break down into actions of success or search. The search action shows whether any user ran an outputlookup that messed up the lookup file, and the success actions are just people logging in or opening a new tab. It might not be a smoking gun, but it will narrow down who could have done it. (See the refined search below.)
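Something along these lines, for example (assuming the internal_observability exclusion mentioned above; the search field holds the SPL each user ran, so an outputlookup against that file should stand out):

index=_audit (action=search OR action=success) user!=internal_observability
| table _time user action search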
Hi, I need to do observability on different web applications on Windows workstations. For example, I need to measure the response time or error codes of the web app. Is it possible to collect these metrics in Splunk? How? With Splunk APM? Website monitoring? Another question: how do I collect events from the Windows Event Viewer? Thanks.
Use an empty alternative:

| rex field=MESSAGE "aaa(?<FIELD1>bbb|)"
| rex field=MESSAGE "ccc(?<FIELD2>ddd|)"
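A quick way to test it with a made-up MESSAGE value:

| makeresults
| eval MESSAGE="aaabbb cccXXX"
| rex field=MESSAGE "aaa(?<FIELD1>bbb|)"
| rex field=MESSAGE "ccc(?<FIELD2>ddd|)"

Here FIELD1 captures "bbb", while FIELD2 matches the empty alternative and comes back as an empty string instead of the field being missing.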