Hi all.
I have built a simple scripted input that grabs XML data over HTTP:
#!/bin/bash
# Fetch the XML feed and write it to stdout for Splunk to index
curl -s http://www.a.com/EN.XML
Everything works fine, but Splunk indexes all the events each time it polls the file, resulting in duplicate events.
What is the best way to check the incoming XML data against what has already been indexed in Splunk, so that only events that have not yet been indexed are pulled in?
Thanks!
The best (and possibly only) way would be to implement this logic in your script. Splunk has no built-in ability to compare incoming data from a scripted input against what is already in the index.
My suggested approach would be to edit your script so it keeps the last version of the XML file; when you issue the next request, compare the newly fetched data with the previous version and output only what is new. A rough sketch follows.
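A minimal sketch of that idea, assuming the feed is effectively one event per line and that the script can write to a local state directory (the URL and paths below are placeholders, not your real ones): it saves each fetched copy and uses comm to print only the lines that were not present in the previous copy.

#!/bin/bash
# Sketch: emit only lines that are new since the last poll.
URL="http://www.a.com/EN.XML"
STATE_DIR="/opt/splunk/var/mydata"   # assumed writable state directory
PREV="$STATE_DIR/EN.previous.xml"
CURR="$STATE_DIR/EN.current.xml"

mkdir -p "$STATE_DIR"
curl -s "$URL" -o "$CURR" || exit 1

if [ -f "$PREV" ]; then
    # comm needs sorted input; print lines only present in the new copy.
    comm -13 <(sort "$PREV") <(sort "$CURR")
else
    # First run: everything is new.
    cat "$CURR"
fi

# Keep the current copy around for the next comparison.
mv "$CURR" "$PREV"

If your events span multiple lines, you would need a smarter comparison (e.g. keyed on an ID or timestamp inside each record) rather than a line-by-line diff, but the keep-and-compare pattern is the same.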
Thought so (was hoping I could cheat) 🙂
Thanks for your help!