Hello,
There is a Splunk forwarder that was stopped for a week without our knowledge, so the data from that server was not indexed. Is there a way to retrieve the missing 7 days of data into Splunk?
Just restart the forwarder. It will remember where it left off and forward the missing logs.
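A minimal sketch of that restart, assuming a Linux universal forwarder at the default install path (adjust the path for your environment):

```shell
# Default install path for a Linux universal forwarder (adjust if yours differs)
/opt/splunkforwarder/bin/splunk restart

# The fishbucket checkpoint database is what lets the forwarder resume where
# it left off; after the restart you can check monitored-file status with:
/opt/splunkforwarder/bin/splunk list monitor
```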
Hi @woodcock,
That is what I thought would happen. Strangely, it only retrieved the logs from the last few hours before the restart (restarted at 9 AM on 01/14, but only got logs back to 12 AM on 01/13).
I was reading about manually injecting these missing logs using splunk oneshot, but the problem is we have one log file with entries from 01/05 - 01/14. If I use oneshot, I suspect there will be duplicate entries, which will mess up the report generated from this data.
Please advise.
The issue is there is no timestamp in the log file for the entries. I counted back hours to work out on what date the entries started logging. If I use oneshot, how will Splunk know the date of the entries? I assume this will not work. Is there a workaround? Thank you.
Are you using DATETIME_CONFIG = CURRENT? How is Splunk timestamping the events in the normal case?
This is the configuration I have for this particular source type. This is from props.conf
DATETIME_CONFIG =
NO_BINARY_CHECK = true
category = Custom
pulldown_type = true
SHOULD_LINEMERGE = false
disabled = false
You should probably set a custom datetime.xml to get the timestamp from the file/name.
So how is Splunk setting _time for your events in the normal case?
Just copy the log, trim it down to the missing date range, and then use oneshot on the modified file.
The issue is there is no timestamp in the log file for the entries. I counted back hours to work out on what date the entries started logging. If I use oneshot, how will Splunk know the date of the entries? I assume this will not work. Is there a workaround? Thank you.
Copy the file. Edit it. Oneshot it. Delete the copy.
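That sequence could look something like this. All paths, the index, the sourcetype, and the line range to keep are placeholders; substitute your own values:

```shell
# Work on a copy so the original log is untouched
cp /var/log/myapp/app.log /tmp/app_copy.log

# Trim the copy down to only the un-indexed entries
# (e.g. keep everything from line 5000 onward -- adjust to your file)
sed -n '5000,$p' /tmp/app_copy.log > /tmp/app_missing.log

# Index the trimmed file once; oneshot does not create a persistent input
/opt/splunkforwarder/bin/splunk add oneshot /tmp/app_missing.log -index main -sourcetype my_sourcetype

# Clean up the temporary copies
rm /tmp/app_copy.log /tmp/app_missing.log
```

Note that without timestamps in the events, Splunk will still fall back to file modification time or current time for _time, so this addresses the duplicate-entry concern but not the dating concern.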
If the log files are still there on the servers (with the same name/location from where you were monitoring), they would get ingested automatically. If they've been rolled to a different location/name, you could create a temporary monitor input with the same index/sourcetype and other settings, but pointing at the new location, to ingest those rolled logs. You can also use the oneshot method. See this for information on the oneshot method:
https://docs.splunk.com/Documentation/Splunk/7.2.3/Data/MonitorfilesanddirectoriesusingtheCLI
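A temporary monitor stanza in inputs.conf could look like this (the path, index, and sourcetype are placeholders; remove the stanza once the backlog is ingested so the files aren't picked up again):

```
[monitor:///tmp/rolled_logs/*.log]
index = main
sourcetype = my_sourcetype
disabled = false
```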
@somesoni2,
I was reading about manually injecting these missing logs using splunk oneshot, but the problem is we have one log file with entries from 01/05 - 01/14. If I use oneshot, I suspect there will be duplicate entries, which will mess up the report generated from this data.