Hi Splunker;
The syslog server stores any logs coming to it via syslog in .log files; Splunk then reads the logs from those files and indexes them. When a file starts to get full, Splunk converts it to a .gz file to free up space so new logs can be written to the .log file again.
Can I keep the (/opt/syslog) path at 80% used instead of 100%, to avoid such alerts and focus on real issues?
Best Regards;
Abdullah Al-Habbash
You can't use logrotate to move data once the filesystem is at x GB in size.
You can write a shell script that uses a combination of df -h, grep, awk, mv, gzip, etc. though. I doubt anyone here is going to write that for you though.
You should try a Linux forum, not a splunk forum.
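As a starting point, here is a minimal sketch of such a script. Everything in it is an assumption for illustration: the directory, the 80% threshold, and the policy of compressing .log files older than a day; it is not a tested production tool.

```shell
#!/bin/sh
# Hypothetical cleanup sketch: directory, threshold, and age policy
# are all assumptions, not anything from this thread.
LOG_DIR="${1:-.}"      # directory to clean (e.g. /opt/syslog)
THRESHOLD="${2:-80}"   # act when filesystem usage reaches this percent

# Integer usage percent of the filesystem holding LOG_DIR.
usage=$(df -P "$LOG_DIR" | awk 'NR==2 {gsub(/%/,""); print $5}')

if [ "$usage" -ge "$THRESHOLD" ]; then
    # Compress uncompressed logs older than one day to reclaim space.
    find "$LOG_DIR" -name '*.log' -mtime +1 -exec gzip -f {} +
    echo "usage ${usage}%: compressed old logs"
else
    echo "usage ${usage}%: below threshold ${THRESHOLD}%"
fi
```

Run it from cron (e.g. every 15 minutes) against the syslog directory; note it only compresses files, so something must still delete old .gz files eventually.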
This is not a Splunk problem. The .gz files are created by Linux utilities, not by Splunk. You must employ other Linux utilities (perhaps Logrotate) to ensure disk space does not become 100% utilized.
There is no correct way to use logrotate with syslog AND Splunk. It just wasn't designed for the chore.
Better to write your own script for log rotation. Those who have found "success" with this have data loss and don't know it.
@jkat54 interesting that you would say that. Could you please give us some more details and references ?
I may have jumped the gun here. Every implementation I've seen had a step to restart syslog.
I suppose if you told logrotate to only rotate files seen more than once AND you make syslog write files with date-time stamps, you could have success. But in my experience there were file handles open everywhere, race conditions between the Splunk monitor, syslog, and logrotate, etc.
I've never seen one setup with all three that wasn't dropping data at some point or causing other unforeseen issues.
I just steer clear, my opinion I suppose.
Yeah, totally agree with you on this: "in my experience there were file handles open everywhere, race conditions between the Splunk monitor, syslog, and logrotate, etc." ... and the shorter the rotation interval, the greater the chance of data loss.
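One way to realize the date-stamped-files idea mentioned above, purely as a sketch with hypothetical paths, is an rsyslog template that embeds the date in the filename. Each day's logs go to a fresh file, so yesterday's file is never reopened and can be compressed or deleted without restarting rsyslog:

```
# /etc/rsyslog.d/network.conf (hypothetical path and filter)
# Old files are never written to again, so they can be compressed
# or removed without touching the rsyslog service.
template(name="DailyFile" type="string"
         string="/data/syslog/network/%HOSTNAME%/%$YEAR%-%$MONTH%-%$DAY%.log")
if ($fromhost-ip != "127.0.0.1") then {
    action(type="omfile" dynaFile="DailyFile")
}
```

The Splunk monitor would then watch the directory with a wildcard; a simple cron job can gzip or delete files whose date is in the past.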
Under /etc/logrotate.d/splunk on the syslog server I have the configuration below:
/data/syslog/network/*/*/*/*.log
/data/syslog/network/*/*/*/*/*.log
/data/syslog/security/*/*/*/*.log
/data/syslog/security/*/*/*/*/*.log
/data/syslog/security/*/*/*/*/*/*.log {
daily
rotate 1
compress
missingok
notifempty
nocreate
postrotate
systemctl reload-or-restart rsyslog.service
systemctl reset-failed rsyslog.service
endscript
}
The /opt/syslog partition is 296 GB in size.
How can I change this configuration so that logrotate rotates when the partition reaches 200 GB?
I'd appreciate your support with this.
You can use the size option:
https://linux.die.net/man/8/logrotate
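For example, a sketch based on the config posted above (the 100M threshold is an arbitrary assumption). Note that logrotate's size option triggers on the size of each individual file, not on overall partition usage, and it is only evaluated when logrotate actually runs, so pair it with an hourly cron entry if the default daily run is too coarse:

```
/data/syslog/network/*/*/*/*.log {
    size 100M        # rotate a file once it exceeds 100 MB
    rotate 1
    compress
    missingok
    notifempty
    nocreate
    postrotate
        systemctl reload-or-restart rsyslog.service
    endscript
}
```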