I was wondering what happens to data from scripts and logs when a machine is either so heavily loaded that it can't get enough cycles to run the process, or when it loses network connectivity.
We are an HPC shop, and our machines sometimes become unresponsive when running high-load jobs.
Will log data just queue up somewhere and get sent to the indexer eventually?
Splunk will stop seeking forward in the log files it is monitoring, and once the connection is restored or resources become available, it will pick back up where it left off. Data dropped into the spool directory will queue. Socket inputs will start to block; there is a small in-memory queue, but once it fills they will eventually begin to drop, so how congestion is handled depends on the queueing mechanisms upstream and downstream. If Splunk cannot forward data, scripted input data will not be captured.
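For socket inputs that can't tolerate in-memory drops, Splunk lets you back the input with a persistent (on-disk) queue, configured per input stanza in inputs.conf. A minimal sketch is below; the port number and sizes are illustrative, not recommendations:

```ini
# inputs.conf -- example stanza; adjust port and sizes for your environment
[tcp://:9001]
# small in-memory queue that fills first
queueSize = 512KB
# once the in-memory queue is full, events spill to disk instead of dropping
persistentQueueSize = 50MB
```

With a persistent queue in place, a temporarily blocked forwarder buffers socket data to disk and drains it once the downstream connection recovers, rather than dropping events.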
@araitz , my question is similar to the one above, but I would like to know what happens to logging in the scenarios below when there is an outage. Does Splunk resume logging once the systems recover from the outage, or does it lose the logs?
1. Logs forwarded from an app
2. Logs synced from an S3 bucket
3. Logs pulled via an API
4. Data coming through a heavy forwarder
Thanks in Advance