I would like to write a query that starts with
starttime=06/08/2018:00:00:00 endtime=06/08/2018:00:01:00 index=* ...
takes starttime and endtime as parameters, and produces an epoch time in the result.
Basically, every minute I plan to execute:
starttime=06/08/2018:00:00:00 endtime=06/08/2018:00:01:00 index=* ...
starttime=06/08/2018:00:00:00 endtime=06/08/2018:00:02:00 index=* ...
starttime=06/08/2018:00:00:00 endtime=06/08/2018:00:03:00 index=* ...
...
and I want to get the results as a table like:
1533686460,1
1533686520,1
1533686580,1
...
You could use either
eval starttime=strptime(starttime,"%m/%d/%Y:%H:%M:%S")
or just
eval start_time=starttime
to get the epoch. Similarly for endtime.
I have tried both, but 0 events are returned from either:
starttime=07/01/2018:00:00:00 endtime=07/01/2018:00:01:00 | eval starttime=starttime | table starttime
starttime=07/01/2018:00:00:00 endtime=07/01/2018:00:01:00 | eval starttime=strptime(starttime,"%m/%d/%Y:%H:%M:%S") | table starttime
I'm using Splunk 6.5.
Are you getting events for your existing search, i.e.
starttime=06/08/2018:00:00:00 endtime=06/08/2018:00:01:00 index=. Can you post a sample event result?
Yes, I definitely do get results for:
starttime=06/08/2018:00:00:00 endtime=06/08/2018:00:01:00 index=*
(put * after index)
I cannot post a sample event result.
Do the queries I posted above work on your Splunk instance?
I don't have events for these dates, but the dummy search below works for me:
| makeresults | eval starttime="06/08/2018:00:00:00" | eval endtime="06/08/2018:00:01:00" | eval start_time=strptime(starttime,"%m/%d/%Y:%H:%M:%S")
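The same pattern extends to converting both timestamps (a sketch; the field names start_epoch and end_epoch are illustrative, not from the thread):
| makeresults
| eval starttime="06/08/2018:00:00:00", endtime="06/08/2018:00:01:00"
| eval start_epoch=strptime(starttime,"%m/%d/%Y:%H:%M:%S")
| eval end_epoch=strptime(endtime,"%m/%d/%Y:%H:%M:%S")
| table start_epoch end_epoch
Note this only works because makeresults creates starttime and endtime as real fields; typed in the search bar, they are time modifiers instead.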
Why would you run a search every minute to look for the last minute? This is both very wasteful and does not account for forwarding pipeline latency (a typical average latency from when an event happens to when it gets indexed is ~250 seconds, which is longer than 60 seconds). Let's back up: tell us what data you have (SHOW SAMPLE EVENTS) and explain what you are trying to achieve (forget about SPL for now).
My intention is to copy events out of Splunk into some other store. I would like to periodically run a query and copy the Splunk data somewhere else.
In certain cases the Splunk instance is down, queries time out, or events show up later than their indexing time...
Usually I could have run the query every hour, say, and appended the results to the exported dataset, but I want to upsert, and as the upsert key I want to use starttime.
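One way to recover that key without referencing starttime as a field is the addinfo command, which attaches the search's own time window to each result (a sketch; upsert_key is an illustrative field name, and info_min_time is the epoch of the search's earliest boundary):
index=* earliest=06/08/2018:00:00:00 latest=06/08/2018:00:01:00
| addinfo
| eval upsert_key=info_min_time
| table _time upsert_key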
Anyway, it seems starttime / endtime are time-range modifiers consumed by the search itself rather than fields on the events, which is why they cannot be referenced in the table being created.
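If the end goal is the epoch,count table from the original question, a single search over the whole window may replace the per-minute runs entirely (a sketch; _time is stored as an epoch internally, and copying it into a plain field such as the illustrative epoch_time makes the raw number visible):
index=* earliest=06/08/2018:00:00:00 latest=06/08/2018:00:03:00
| bin _time span=1m
| stats count by _time
| eval epoch_time=_time
| table epoch_time count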