I am using EventGen to generate transaction-type data: I create an event in Splunk and then, at some point in the future, create a new event that revokes the initial one. It works like this
However, it seems that because EventGen reads the replacement file once at startup, it is not aware when the saved search rewrites the lookup via outputlookup, and it continues to use the stale copy. This means the same GUID can potentially be reused in (2), and new GUIDs created in (1) are never candidates.
I can see this in the log file:
Normalized replacement file /opt/splunk/etc/apps/app/lookups/guids.csv
Is there a way to force this to refresh?
TLDR:
You cannot force EventGen to use the updated replacement file.
Long answer:
EventGen is designed for high-volume data generation, so it reads the replacement file once and caches it for event token replacement. If it instead re-read the file for every generated event to pick a random value, the I/O latency would be unacceptable.
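To make the trade-off concrete, here is a minimal sketch of the read-once caching pattern described above. This is an illustration of the idea, not EventGen's actual code; the class and method names are invented for this example.

```python
import csv
import random

class ReplacementCache:
    """Illustrative sketch of read-once caching (not EventGen's real
    implementation): the replacement file is loaded into memory at
    startup, so later rewrites of the CSV on disk are never seen."""

    def __init__(self, path):
        # Single disk read at startup; all replacements come from this cache.
        with open(path, newline="") as f:
            self._values = [row[0] for row in csv.reader(f) if row]

    def replace_token(self):
        # Per-event replacement is a pure in-memory lookup -- no file I/O,
        # which is what keeps high-volume generation fast.
        return random.choice(self._values)
```

This is also exactly why a saved search rewriting the lookup has no effect mid-run: the generator never goes back to disk.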
If this behaviour is critical for your use case, you can file a feature request here: https://github.com/splunk/eventgen/issues/new/choose
Thanks for your feedback.
Thanks, that's as I suspected, so no big deal. I shut my instance down nightly and don't need overnight data, so I have saved searches that update the file before shutdown; the refreshed lookup is then in place for the next restart.
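For anyone wanting to replicate that workaround, a scheduled saved search along these lines could rewrite the lookup before the nightly shutdown. This is a hypothetical sketch: the stanza name, base search, and schedule are assumptions, not taken from the poster's environment.

```ini
# savedsearches.conf -- illustrative only; adjust the search, lookup name,
# and cron_schedule to your own app and shutdown window.
[refresh_guids_before_shutdown]
search = index=app sourcetype=txn | table guid | outputlookup guids.csv
cron_schedule = 0 22 * * *
enableSched = 1
```

Because EventGen reads the replacement file only at startup, running this before shutdown means the updated guids.csv is the one cached on the next restart.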