Hello -
Admitted new guy here,
I have a heavy forwarder sending data from a MySQL database table into Splunk once a day, and it works great. Now I want to send data from a 'customer'-type table with about 200 rows, and I would like to replace the data every day rather than append 200 new rows to the index each day.
How is this best accomplished? Tried searching, but I may not even be using the correct terminology.
Thanks, a lookup seems to be the answer, so that's what I'll go with.
Hi @twanie ,
glad it worked for you, see you next time!
Let us know if we can help you more, and please accept an answer for the benefit of other Community members.
Ciao and happy splunking
Giuseppe
P.S.: Karma Points are appreciated by all the contributors 😉
Hi @twanie ,
as @bowesmana said, it isn't possible because Splunk isn't a database: an index stores all the events day by day, and you cannot delete or replace them; this way you also keep the history of your data.
If you want a table that you can replace every day, you could save the results of your query in a lookup that you recreate every day.
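As a sketch of that approach (the index, field, and lookup file names here are hypothetical, assuming the DB data lands in an index called mysql_customers), a scheduled search run once a day could rewrite the lookup, since outputlookup overwrites the file by default:

```
index=mysql_customers earliest=-24h
| table customer_id customer_name status
| outputlookup customers.csv
```

Schedule this to run shortly after the daily DB input, and the lookup always holds only the latest snapshot.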
Ciao.
Giuseppe
You do not replace data in Splunk - if you ingest it into an index, it remains there until it expires. Splunk is time-based storage, so every piece of data gets a timestamp that reflects the event's creation in some way.
So every day when you ingest those 200 rows, they will, if set up to do so, carry the date stamp of the day they were ingested.
If you only ever search a single day's data, you will get the latest data. Alternatively, if you set a very short retention period on the index, the data will age out and disappear after that time.
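If you go the short-retention route, a minimal indexes.conf sketch might look like the following (the index name and paths are hypothetical, and note that aging happens per bucket, not per event, so expiry is approximate):

```
# indexes.conf - hypothetical index for the daily customer snapshot
[mysql_customers]
homePath   = $SPLUNK_DB/mysql_customers/db
coldPath   = $SPLUNK_DB/mysql_customers/colddb
thawedPath = $SPLUNK_DB/mysql_customers/thaweddb
# roll data to frozen (delete by default) after ~2 days, in seconds
frozenTimePeriodInSecs = 172800
```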
The alternative is to make those rows a lookup, in which case the data IS replaced, because you can overwrite the lookup. Note, however, that creating a lookup and ingesting into an index are not the same process.
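To illustrate the difference (the lookup file name is hypothetical): once the rows live in a lookup, you read them back with inputlookup rather than an index search, and each overwrite replaces the whole table:

```
| inputlookup customers.csv
```

This always returns the current ~200 rows, with no history of previous days.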
What is your use case for this data?