All Posts

I have a puzzle with a Linux host running RHEL 8.10 and Splunk Universal Forwarder 9.4.1, configured to forward data from the local syslog files /var/log/secure and /var/log/messages to our indexers in Splunk Cloud. Events from /var/log/secure are found there as expected, but no events are found from /var/log/messages. While troubleshooting, I did find these messages in the _internal index from the host:

06-02-2025 15:01:05.507 -0400 INFO WatchedFile [3811453 tailreader0] - Will begin reading at offset=13553847 for file='/var/log/messages'.
06-01-2025 03:21:02.729 -0400 INFO WatchedFile [2392 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/var/log/messages'.

So the file was read, but no events are found in Splunk?

[Edit 2025-06-09] The file inputs are configured with a simple stanza in a custom TA:

[monitor:///var/log]
whitelist = (messages$|secure$)
index = os
disabled = 0

As the stanza shows, two files are forwarded: /var/log/messages and /var/log/secure. With this search:

| tstats count where index=os host=server-name-* by host source

I get these results:

host           source             count
server-name-a  /var/log/secure    39795
server-name-b  /var/log/messages  112960
server-name-b  /var/log/secure    21938

Servers a and b are a pair running the same OS, patches, applications, etc.
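One way to narrow this down might be to check whether the forwarder is reporting any throughput for that source at all (a sketch; the host and source values are taken from the post, and per_source_thruput/series are standard metrics.log fields):

```
index=_internal host=server-name-a source=*metrics.log group=per_source_thruput series="/var/log/messages"
| timechart span=1h sum(kb) AS kb_forwarded
```

If kb_forwarded is non-zero, the data is leaving the forwarder and the problem is more likely index-time filtering or timestamping on the Cloud side; searching index=os host=server-name-a over All Time can rule out events landing with unexpected timestamps.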
Since you're posting this in the Cloud section, I'm assuming you want to send the events to Splunk Cloud. There is no way to send such data directly to Cloud, so you will need at least one server on-prem to gather data from your local environment. Depending on your sources and what you want to collect (and the goal of your data collection), you might use one of several syslog receiving methods: a dedicated syslog daemon (rsyslog or syslog-ng) either writing to files or sending directly to a HEC input, or SC4S. There are multiple methods of handling SNMP (the SNMP modular input, SC4SNMP, a self-configured external SNMP collecting script, and/or snmptrapd). And API endpoints can be handled by some existing TAs, or you can handle them on your own with external scripts or Add-on Builder-created API inputs. So there is a plethora of possibilities.
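As a minimal sketch of the "daemon writing to files" variant (the port, paths, and filter below are assumptions, not from the post), an rsyslog receiver could look like:

```
# /etc/rsyslog.d/00-splunk-receiver.conf -- example UDP receiver on port 514
module(load="imudp")
input(type="imudp" port="514")

# Write each sender's events to its own file for a forwarder to monitor
template(name="PerHostFile" type="string"
         string="/var/log/remote/%HOSTNAME%/syslog.log")
if ($fromhost-ip != "127.0.0.1") then {
    action(type="omfile" dynaFile="PerHostFile")
}
```

A Universal Forwarder (or SC4S, which replaces this layer entirely) then picks up /var/log/remote and ships it to Cloud.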
App's description on Splunkbase doesn't list any specific external licensing requirements so you should be able to just download the app from there and use it.
I did run 'splunk resync shcluster-replicated-config'. I left it overnight and somehow SH3 synced itself. It also became the captain, which I changed back. I ran a sync on SH1 and all is good now. No clue how or why it resynced itself after many failed tries and clean-ups.
Please use a code block or preformatted paragraph for config snippets; it greatly improves readability. Also, your config's syntax is completely off. If that is your actual config, use 'splunk btool check' to verify it. If not, please copy-paste your literal settings.
There are several possible approaches to such a problem. One is filldown, already mentioned by @richgalloway. Another is streamstats or autoregress. Or you might simply reformulate your problem to avoid this altogether. It all depends on the particular use case.
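As a sketch of the filldown approach for the version example (the index, sourcetype, and field names are placeholders, not from the post):

```
index=your_index sourcetype=your_sourcetype
| timechart span=1d latest(version) AS version BY item
| filldown
```

timechart leaves days with no reported version empty; filldown then carries the last observed value forward into those empty cells, which is exactly "last observation carried forward" and renders cleanly as a line chart.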
Hi @harpr86  Unfortunately this isn't possible. I think this is the same when using the UI? E.g. you create a search, it starts as private, and then you have to update the permissions to share it. I hope this helps; sorry it isn't the answer you might have hoped for!
It would help to know more about your use case including any existing SPL you're using. Have you looked at the filldown command?
Hi @ayomotukoya,
I suppose that the transforms.conf isn't correct:

[snmp_hostname_change]
DEST_KEY = MetaData:Host
REGEX = Agent_Hostname\s*\=\s*(.*)
FORMAT = host::$1

I can be more detailed and certain if you share a sample of your logs.
Ciao.
Giuseppe
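As a quick sanity check before deploying, the REGEX from the stanza above can be tested against a sample trap line in Python (the sample line here is hypothetical; substitute a line from your real logs):

```python
import re

# REGEX from the suggested transforms.conf stanza
pattern = re.compile(r"Agent_Hostname\s*\=\s*(.*)")

# Hypothetical sample trap line -- replace with a line from your real data
sample = "Agent_Hostname = fw-edge-01"

match = pattern.search(sample)
if match:
    # FORMAT = host::$1 would set the host metadata to this captured value
    print(match.group(1))  # prints: fw-edge-01
```

If the capture group comes back empty or the pattern does not match at all, the props/transforms change will silently do nothing, which matches the symptom described.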
Seeing events now. The default template needed a notification assigned, and that notification needed to be defined, as there was none. The error mentioned above is still showing, but I am not sure whether it is causing any visible issues.
Hi all. I'm having an issue with hostname override for SNMP logs. I created these props and transforms to get the Agent_Hostname from the logs to override the host (syslog011) for these SNMP trap logs, but it doesn't seem to have worked. Not sure what the mistake is here.

TRANSFORMS.CONF

[snmptrapd_kv]
DELIMS = "\n,", "="

[snmp_hostname_change]
DEST_KEY = MetaData::Host
REGEX = Agent_Hostname = (.*)
FORMAT = host::$1

PROPS.CONF

[snmptrapd]
disabled = false
LINE_BREAKER = ([\r\n]+)Agent_Address\s=
MAX_TIMESTAMP_LOOKAHEAD = 30
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
TIME_FORMAT = %Y-%m-%d %H:%M:%S
TIME_PREFIX = Date\s=\s
EXTRACT-node = ^[^\[\n]*\[(?P<node>[^\]]+)
REPORT-snmptrapd = snmptrapd_kv
TRANSFORMS-snmp_hostname_change = snmp_hostname_change
@livehybrid thanks for your response, but I am looking to perform the two operations in a single API call. For example, at the time of creating a Splunk alert, the alert should have read+write permissions for the user.
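For reference, the REST API does treat these as two separate operations: the saved search is created first (private by default) and its ACL is updated afterwards via the /acl endpoint. A sketch with placeholder host, credentials, and names (adjust everything to your environment):

```shell
# 1. Create the alert/saved search (it starts as private)
curl -k -u admin:changeme \
  https://localhost:8089/servicesNS/admin/search/saved/searches \
  -d name=my_alert \
  -d "search=index=main error"

# 2. Share it and grant read+write in a second call
curl -k -u admin:changeme \
  https://localhost:8089/servicesNS/admin/search/saved/searches/my_alert/acl \
  -d sharing=app -d owner=admin \
  -d perms.read=user -d perms.write=user
```

A script wrapping both calls is the usual way to make this feel like one operation from the caller's side.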
I have successfully set up AME and tested the tenant connection, and I get back that the connector is healthy. I can also send a test event from the tenant setup page and can see it in the default index. But if I go to Events, there is no test event, nor any of the alerts I have configured to send to AME, even though I can see them in the traditional triggered alerts, as they are still configured as well. Looking in _internal I do see the below error:

2025-06-06T11:24:06.612+00:00 version=3.4.0 log_level=ERROR pid=1615220 s=AbstractHECWrapper.py:send_chunk:304 uuid=***************** action=sending_event reason="[Errno 111] Connection refused"

This seems to suggest there is an issue with HEC, but the tenant shows green/healthy and the test event reaches the index. Any assistance would be appreciated. Also, if I create an event from the Events page, that does show up in the app.
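Given the "Connection refused" in the error, it may be worth verifying from the Splunk host itself that the HEC endpoint is listening where the app expects it (a sketch; 8088 is only the default HEC port, and your deployment may use a different host, port, or SSL setting):

```shell
# Probe the local HEC health endpoint; adjust host/port/https to your config
curl -k https://localhost:8088/services/collector/health
```

A listening, enabled HEC returns a small JSON health message; getting "Connection refused" here too would point at the port/SSL settings the app is configured with, rather than at the tenant connection itself.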
Yah.  And ole Lizzy Li, the principal product manager, indicated in a .conf session that "parity" between Classic and DS was their intent.  So much for parity.
Does Splunk support fill-forward, or "last observation carried forward"? I want to create daily-based monitoring. One example is getting the version of all reported items. I'm getting the version only when it changes, but for each day I need the last available version of the item. How can this be realized in Splunk to produce a line chart?

Thank you in advance
Markus
I have promoted multiple events into a case. From the case, I will run a playbook. I understand that I can use the following container automation calls to set the status to closed:

phantom.update()
phantom.close()
phantom.set_status()

However, these three calls are only able to set the case's status to closed. Is it possible to also set the status of the promoted events within the case to closed? For example, I have the following events:

Event #1
Event #2
Event #3

When these three events are promoted to a case and I run the playbook from the case, is it possible to set the status of the case and of the three events to closed?
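A heavily hedged sketch of one possible approach: phantom.set_status() is a documented playbook call, but the idea that promoted events can be listed by filtering containers on the case id (the parent_container filter below) is an assumption about the SOAR REST data model that should be verified on your version:

```python
import phantom.rules as phantom


def close_case_and_promoted_events(container):
    # Close the case container itself
    phantom.set_status(container=container, status="closed")

    # ASSUMPTION: events promoted into a case can be found by filtering
    # the container endpoint on the case id -- verify on your SOAR version
    url = phantom.build_phantom_rest_url("container")
    url += "?_filter_parent_container={}".format(container["id"])
    response = phantom.requests.get(url, verify=False).json()

    # Close each promoted event as well
    for event in response.get("data", []):
        phantom.set_status(container=event["id"], status="closed")
```

If the filter field turns out to be different in your release, querying /rest/container for a known case in the REST API browser should reveal which field links the events back to the case.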
@Leonardo1998  In addition to other recommendations: you can configure a dedicated VM and install either syslog-ng or rsyslog, making it act as a syslog forwarder. Network devices (such as firewalls, routers, and switches) can then be configured to send logs over a custom port to this syslog forwarder. On the syslog forwarder, update syslog-ng.conf or rsyslog.conf to capture these logs and store them in a specific directory. From here, you have two options:

1. Install the Splunk Universal Forwarder (UF) on the server and configure it to forward the logs to the Splunk indexers.
2. Install the full Splunk Enterprise package on the server and use it as a Heavy Forwarder (HF).

If the server is used as a Heavy Forwarder, you can also install the relevant Technology Add-ons (TAs) for parsing. For example, if you're onboarding Fortinet firewall logs, you can install the Fortinet Add-on on this HF for proper parsing before forwarding the logs to the indexers.

https://www.splunk.com/en_us/blog/tips-and-tricks/using-syslog-ng-with-splunk.html?locale=en_us
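If you go the UF route, the forwarder side can be a simple monitor stanza over the directory the syslog daemon writes to (a sketch; the path, index name, and host_segment value are assumptions that must match your actual directory layout):

```ini
# inputs.conf on the UF -- assumes files land in /var/log/remote/<host>/...
[monitor:///var/log/remote]
sourcetype = syslog
index = network
disabled = 0
# Take the host name from the 4th path segment: /var(1)/log(2)/remote(3)/<host>(4)
host_segment = 4
```

host_segment keeps the original device name as the Splunk host field instead of the syslog server's own hostname.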
I started looking into Splunk Connect for SNMP (SC4SNMP) and I'm reviewing the documentation and requirements. One thing I'm not entirely sure about: Can I install SC4SNMP (Docker container) on the same machine where I already have my Intermediate Forwarder, or would it be better to run it on the Deployment Server?
Thanks a lot for your reply! For log collection, SC4S looks like a great fit — we'll definitely look into it. That said, we’re also interested in the infrastructure-level monitoring of our network devices — things like interface status, bandwidth usage, CPU load, etc. In this case, is it possible (or recommended) to use SNMP with Splunk? If so, are there supported solutions or best practices for integrating SNMP metrics into Splunk in an agentless way? Any advice or experience would be greatly appreciated!