Hi All, I want an SPL query that returns the total size occupied by each index, from the date of onboarding until now, along with the remaining space available for each index. I would also like a query that returns those totals since onboarding regardless of the time range I pick (one hour, one month, or one year). Thanks, Srinivasulu S
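A minimal sketch of one common approach, assuming your role can run the rest command against the indexers (currentDBSizeMB and maxTotalDataSizeMB come from the data/indexes REST endpoint; summing the per-indexer cap is only a rough estimate of total capacity). Because rest output is not bound to the time picker, the same search returns the current totals whichever time range is selected:

| rest /services/data/indexes splunk_server=*
| stats sum(currentDBSizeMB) as used_mb sum(maxTotalDataSizeMB) as max_mb by title
| eval remaining_mb = max_mb - used_mb
| rename title as index
| table index used_mb max_mb remaining_mb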
I have searched for the events using both the new and old sourcetypes and a date range of All Time, to no avail. I am positive I didn't delete anything manually.
@doniaelansasy Splunk doesn't remove events unless explicitly told to. The delete command (which requires the "can_delete" role) only makes data unsearchable; the events remain in the buckets. Data is physically removed only if you use the CLI clean command or the index retention settings expire it.

Check the original sourcetype:

index=<your_index> earliest=-7d latest=now

This will show all events in the index, regardless of sourcetype. Look for your "missing" events and note their sourcetype; it's probably the original one you had before the change.

Search explicitly for the old and new sourcetypes:

index=<your_index> sourcetype=<old_sourcetype>
index=<your_index> sourcetype=<new_sourcetype>

This will confirm whether the old events are still there and whether new events are coming in under the updated sourcetype.

Preventing future issues: before pushing changes via the deployment server, test the stanza in a non-production Splunk instance. Simulate the data input and verify the results.
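If it helps, a hedged sketch of a quick way to list every sourcetype present in the index and when it last received events (adjust the index name and time range to suit):

index=<your_index> earliest=-30d
| stats count latest(_time) as last_event by sourcetype
| convert ctime(last_event)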
I’ve encountered an issue while working on a configuration for a Splunk deployment. I was creating a stanza in the inputs.conf file within an app that would be pushed to multiple clients via the deployment server. The goal was to retrieve specific data across multiple clients. However, I noticed that the data retrieval wasn't working as expected. While troubleshooting the issue, I made several changes to the stanza, including tweaking key values. In the process, I tried to change the source type in the stanza. Unfortunately, after making this change, all the events that had already been indexed and retrieved vanished. I'm looking for guidance on how to recover the missing events or if there’s any way to prevent this in the future when modifying the source type in inputs.conf. Any insights or suggestions on how to address this would be greatly appreciated! Thank you in advance for your help!
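For reference, a hypothetical monitor stanza of the kind described (the path, index, and sourcetype below are placeholders, not the actual configuration). Note that a sourcetype set in inputs.conf only applies to data indexed after the change; events already indexed keep their original sourcetype, so a search pinned to the new sourcetype will not return them:

# inputs.conf in the deployed app - illustrative only
[monitor:///var/log/myapp/app.log]
index = <your_index>
sourcetype = myapp:log
disabled = 0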
I have created an app to send logs to a 3rd party; I created transforms.conf, props.conf and outputs.conf to apply the changes. Our requirement is to send the raw logs in RFC 3164 format to the 3rd party, but when I apply the transforms.conf and props.conf for the sourcetype, there is no change in the output. May I know what is wrong?
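For comparison, a hedged sketch of the usual syslog-routing pattern (the stanza names, group name, and destination host/port are placeholders and may differ from the actual app). These settings only take effect on the instance that parses the data, i.e. a heavy forwarder or indexer, which is a common reason for seeing no change in the output:

# props.conf
[<your_sourcetype>]
TRANSFORMS-route_to_3rd_party = send_to_3rd_party

# transforms.conf
[send_to_3rd_party]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = third_party_syslog

# outputs.conf
[syslog:third_party_syslog]
server = <third_party_host>:514
type = udp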
Hi @chenfan , OK, there's no requirement on the license. Anyway, choosing to install a new instance from scratch is a strange approach, because you said you have a structured architecture with many servers and components: in other words, you want to start a new infrastructure. I'm not sure you save time building the same infrastructure from scratch and copying all the apps and configurations, but it's your choice! Ciao. Giuseppe
One tip: search for the terms "error" and "ssl"; in my case I had mistyped the path to the cert:

tail -f /opt/splunk/var/log/splunk/splunkd.log | grep -i 'error' | grep -i 'ssl'

The forwarder did not attempt an SSL session, but sent the data without SSL.
Hi @samalchow Please let us know if you are getting _internal logs for the UFs that are not sending the Windows data, as this might help determine whether it's a Windows permissions issue on the collection of event data, or an issue with sending the data. Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
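A hedged sketch of one way to check (the host value is a placeholder); if a UF appears here but is sending no Windows events, the problem is more likely with collection than with forwarding:

index=_internal source=*splunkd.log host=<uf_hostname>
| stats count by component, log_level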
Hi @SN1 If you look further back, when was the last event? Have a look using this search, looking back at least to the time of the last event from the missing servers:

| tstats latest(_time) as _time where index=_introspection by host

Then run the search 5-10 minutes later. Are the times of the last events different for the missing host? If so, this would suggest that they are having issues sending logs and that they are delayed, rather than not sending at all.

In addition, it would be worth checking the Splunk log of the missing host directly. Check out $SPLUNK_HOME/var/log/splunk/splunkd.log - are there any references to blocking or output errors?

Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
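A hedged sketch of what that check might look like on the missing host, assuming $SPLUNK_HOME is set (the grep pattern is only a starting point, not an exhaustive list of error strings):

grep -iE 'blocked|tcpout|error' $SPLUNK_HOME/var/log/splunk/splunkd.log | tail -n 50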
Hi @splunkkk Are you still getting other logs / _internal logs from the HF? This will help determine if the error is with sending or receiving data.

Check the $SPLUNK_HOME/var/log/splunk/splunkd.log for any errors relating to SSL/TLS/input/output/queues.

Use netcat to check the expected port is open (nc -vz -w1 localhost <port>) - this assumes netcat is installed and available as the "nc" binary.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
@splunkkk Firstly, could you kindly confirm whether your syslog forwarder is receiving the network logs? You can verify this by running a tcpdump capture. To check for devices from which logs are not being received, please use the following command:

sudo tcpdump -i <interface> host <device_IP> and port <port_number>

Replace <interface>, <device_IP>, and <port_number> with the appropriate values for your environment.

Find interface names: tcpdump -D

To capture traffic for a specific host (e.g., 192.168.1.50): sudo tcpdump -i ens160 host 192.168.1.50 (change the interface to match yours)

To capture traffic on a specific port (e.g., 514): sudo tcpdump -i ens160 port 514
Hi @kiran_panchavat Firewall rules are in place, and nobody has made changes to them. Rsyslog is running on the HF, and disk space should be enough, as the same HF can still receive logs from some other network devices. Any idea what else I can check? Thanks
@splunkkk Ensure no firewall rules or network policies have changed recently that might block traffic (e.g., port 514 or your custom syslog port).

Ensure rsyslog is running on the HF (systemctl status rsyslog or service rsyslog status).

Check the disk space on the syslog forwarder: df -h

Verify whether any queues are blocked on the heavy forwarder by running: tail -n 100 /opt/splunk/var/log/splunk/metrics.log | grep -i "blocked=true"
Hi, @gcusello Considering various factors, we have decided to directly deploy a new Splunk Enterprise 9.3.X instance. Can we directly deploy the License file to the new instance?
@SN1 There should be a message in splunkd.log explaining the problem:

index=_internal source=*splunkd.log

Check that there is enough storage on the volume containing the introspection index. Also, confirm no one turned off introspection. See https://docs.splunk.com/Documentation/Splunk/latest/Troubleshooting/ConfigurePIF#Disable_logging

If the missing hosts haven't reported data recently, they might not appear depending on the default time range (e.g., last 24 hours). Expand the time range in the UI or add earliest=-30d (or further back) to your search.
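To narrow that down, a hedged sketch that keeps only warnings and errors and groups them by host and component, which usually surfaces storage or introspection problems quickly:

index=_internal source=*splunkd.log (log_level=WARN OR log_level=ERROR)
| stats count by host, component
| sort - count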
@SN1 The _introspection index in Splunk is part of the "Platform Instrumentation" feature, which collects information about the systems running Splunk to help diagnose performance issues.

What does platform instrumentation log? - Splunk Documentation
Introspection endpoint descriptions - Splunk Documentation
Hi. Recently I noticed that the Splunk heavy forwarder has stopped receiving logs from network devices. We are using TLS over syslog, and the cert has not expired yet. The rsyslog.conf file should be fine, since it could receive logs previously. Can I know why this is happening?
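One quick check that might help, assuming the HF listens for TLS syslog on port 6514 (substitute your actual host and port): use openssl to confirm the listener answers, completes the TLS handshake, and presents the expected certificate:

# confirm the TLS listener answers and completes a handshake
openssl s_client -connect <hf_host>:6514 </dev/null
# print the validity dates of the certificate the listener actually presents
openssl s_client -connect <hf_host>:6514 </dev/null 2>/dev/null | openssl x509 -noout -dates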
@SN1 If you run this search, how many peers return a count?

index=_internal earliest=-5m@m | stats count by splunk_server

This should give responses from all your indexers, and, if you have your SH / component boxes configured to forward their internal logs, those as well.
I am getting this error on the health check: Root Cause(s): Events from tracker.log have not been seen for the last 238401 seconds, which is more than the red threshold (210 seconds). This typically occurs when indexing or forwarding are falling behind or are blocked.
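If it helps as a starting point, a hedged sketch for spotting blocked queues on the affected instance (field names follow the usual metrics.log queue events; widen the time range as needed):

index=_internal source=*metrics.log group=queue blocked=true
| stats count by host, name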
@SN1 Check if the missing indexer and search head are online and Splunk is running on them. You can SSH into those servers and run splunk status to verify. Are you able to see all the instances in the Monitoring Console?

This could happen if:
- The hosts are down or disconnected.
- The Splunk instance on those hosts is not running.
- There's a network issue preventing data from being forwarded.
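If those look fine, a hedged sketch of two quick follow-up checks (the indexer host and receiving port 9997 are assumptions; substitute your own):

# confirm splunkd is running on the host in question
$SPLUNK_HOME/bin/splunk status
# confirm the forwarding path to the indexer's receiving port is reachable
nc -vz -w1 <indexer_host> 9997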