We configured the input interval to 3000.
After 10 minutes the collection job stops, but not all build plans have been loaded. When the collection job starts again, we get the following entries in /opt/splunk/var/log/splunk/libs_bamboo.log:
2018-10-10 16:56:56,965 INFO pid=25059 tid=MainThread file=bamboo_service.py:make_url_request_obj:24 | req https://server:443/rest/api/latest/result/DEMO-SAM1-53.json?max-results=1000
2018-10-10 16:56:57,076 INFO pid=25059 tid=MainThread file=bamboo.py:collect_events:294 | Looking at detailed result
2018-10-10 16:56:57,077 INFO pid=25059 tid=MainThread file=bamboo.py:collect_events:297 | Build complete date:2018-06-29T13:22:24.933+02:00 Last sync date:2018-10-10 16:50:34+0000
2018-10-10 16:56:57,077 INFO pid=25059 tid=MainThread file=bamboo.py:collect_events:303 | Not writing this event because it is already indexed
But this is not true, because bamboo.py only checks the last sync date against the build-complete date, without checking whether the previous run fetched all of the older build plans.
The root cause is the timestamp handling in bamboo.py.
The last sync date is stored in the KV store without a timezone, i.e. as an "offset-naive" object.
If your local timezone is not GMT, this causes events to be skipped.
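The effect can be reproduced in a few lines. This is a standalone sketch (not code from bamboo.py): a checkpoint taken as naive local time, e.g. 16:50 CEST (UTC+2), gets "+0000" appended when formatted, which mislabels it as UTC and shifts the effective instant:

```python
from datetime import datetime

# The checkpoint is taken as naive local time (16:50 CEST, i.e. 14:50 UTC),
# but the script appends "+0000" when formatting it:
naive_local = "2018-10-10 16:50:34"
mislabeled = datetime.strptime(naive_local + "+0000", "%Y-%m-%d %H:%M:%S%z")

# The same instant, correctly labeled with its +02:00 offset:
correct = datetime.strptime(naive_local + "+0200", "%Y-%m-%d %H:%M:%S%z")

# The mislabeled checkpoint claims to be two hours *later* than reality,
# so any build finishing in that window is skipped as "already indexed":
print((mislabeled - correct).total_seconds() / 3600)  # → 2.0
```

In our case (CEST, UTC+2) every build completing within two hours of the real last sync is silently dropped.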
So the following lines in bamboo.py have to be changed (tzlocal() comes from dateutil.tz, so make sure it is imported):
line 98
> current_checkpoint = datetime.now(tzlocal())
line 101
< current_checkpoint, '%Y-%m-%d %H:%M:%S'))
> current_checkpoint, '%Y-%m-%d %H:%M:%S%z'))
line 233
< current_checkpoint = datetime.now()
> current_checkpoint = datetime.now(tzlocal())
line 272
< last_synced_tz = last_synced + strftime("%z", gmtime())
> last_synced_tz = str(last_synced)
line 307
< current_checkpoint, '%Y-%m-%d %H:%M:%S'))
> current_checkpoint, '%Y-%m-%d %H:%M:%S%z'))
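The idea behind these changes can be sketched in isolation (using the stdlib's timezone.utc as a stand-in for dateutil's tzlocal(); any local or fixed zone works the same way): the checkpoint is created offset-aware, serialized with its offset, and parsed back with %z, so comparisons against Bamboo's offset-aware build-complete timestamps stay correct:

```python
from datetime import datetime, timezone

# Create the checkpoint offset-aware (the patch uses tzlocal() here):
current_checkpoint = datetime.now(timezone.utc)

# Serialize with the offset so it survives the KV store round-trip:
stored = current_checkpoint.strftime("%Y-%m-%d %H:%M:%S%z")

# Parsing with %z yields an offset-aware datetime again:
restored = datetime.strptime(stored, "%Y-%m-%d %H:%M:%S%z")

# Now the comparison against Bamboo's build-complete timestamp is
# between two offset-aware datetimes and is no longer skewed:
build_complete = datetime.strptime(
    "2018-06-29T13:22:24+0200", "%Y-%m-%dT%H:%M:%S%z"
)
print(build_complete < restored)  # True: this old build really is indexed
```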
This fixes the timestamp issue.
Before restarting the Bamboo import, you have to delete the following entries from the KV Store lookup ta_bamboo_checkpointer:
bamboo_last_synced
bamboo_sync_status
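Besides the Lookups UI, the entries can also be removed through Splunk's KV store REST API. This is a sketch only: the app name (here TA-bamboo), host, port, and credentials are assumptions to adapt to your environment, and it requires a running Splunk instance:

```shell
# Delete the two checkpoint records from the ta_bamboo_checkpointer
# collection (app name "TA-bamboo" is an assumption):
curl -k -u admin:changeme -X DELETE \
  "https://localhost:8089/servicesNS/nobody/TA-bamboo/storage/collections/data/ta_bamboo_checkpointer/bamboo_last_synced"
curl -k -u admin:changeme -X DELETE \
  "https://localhost:8089/servicesNS/nobody/TA-bamboo/storage/collections/data/ta_bamboo_checkpointer/bamboo_sync_status"
```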