Splunk Search

How to create an event every second

clorne
Communicator

Hello,

I have a set of data whose events occur at random times, and I would like to have an event every second.
I am able to get that when I work with one single file, using the following search:

 timechart cont=true span=1s values(field1) as fields2

==> an event is created each second with empty fields2 when needed.

When I use this search on several files, it no longer works; I guess that Splunk then works globally on all events rather than per file, and I want to work source by source.

Then I tried to work with bucket:

bucket _time span=1s | stats values(field1) as fields2 by _time

==> no additional event is created.

For now I have defined a custom search command that creates the desired additional events, but I would like to do it with the Splunk search language.

Regards

1 Solution

clorne
Communicator

Hello

Here is the code that adds events every second when necessary and does not add events when there is a change of source.
There is one field named EpochRoundTime that stores the time as an integer.

import splunk.Intersplunk as si

prev_time = 0
storesource = ""
storeevent = []

# read the results into a variable
(results, dummyresults, settings) = si.getOrganizedResults()

# scan the results to identify where events need to be added
for i in range(len(results)):
    current_time = int(results[i]["EpochRoundTime"])

    # check if we are still on the same source or if this is a new source (file)
    if storesource != results[i]["source"]:
        prev_time = 0
    storesource = results[i]["source"]

    # for the first event of each source, do nothing
    if prev_time != 0:
        stationadress = results[i - 1]["StationAdress"]
        # if additional events are needed, store them in a list
        if current_time > prev_time + 1:
            for j in range(current_time - prev_time - 1):
                event = {
                    "EpochRoundTime": prev_time + 1 + j,
                    "StationAdress": stationadress,
                    "VMEDLS_raw": "FAKE",
                    "source": results[i]["source"],
                }
                storeevent.append(event)

    prev_time = current_time

# all gap-filling events have been prepared; append them to the result set
for event in storeevent:
    results.append(event)

# return the results back to Splunk
si.outputResults(results)
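
For completeness, a script like this runs as a custom search command: it is registered in commands.conf and then piped onto a search. A minimal invocation sketch, where the command name fillgaps is hypothetical (use whatever name you registered) and the eval deriving EpochRoundTime from _time is an assumption; the script also expects source and StationAdress fields on each result:

bucket _time span=1s
| stats values(field1) AS fields2 BY source, StationAdress, _time
| eval EpochRoundTime=floor(_time)
| fillgaps

Because the script resets its gap filling whenever the source value changes, the results must arrive sorted by source and then by time, which the BY clause above provides.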



clorne
Communicator

Hello,
In conclusion, I will keep my custom command, because I want additional new events per source but I do not want continuity between two different sources.
But the two commands are interesting.

Thanks again.


woodcock
Esteemed Legend

Describe your custom command and then click "Answer" so that people can learn about your solution.


clorne
Communicator

Hello Woodcock and Jeffland,
Sorry for my late reply; yesterday was a public holiday in France.

Well, I think that both the timechart and bucket _time commands work; my issue may be somewhere else.
When Splunk runs one of these searches (timechart or stats), at the beginning only values(field1) is displayed, and the new events appear during the "finalizing" step.
Until timechart or stats reach the finalizing step, I cannot see my additional events.
I think that my issue with several files is that Splunk never reaches the finalizing step because there are too many new events.

I have the feeling that additional events are also created between the different sources.
If source1 has events from 01-01-2000 01:00:00 till 01-01-2000 02:00:00 and source2 has events beginning on 05-01-2000 01:00:00 and ending on 05-01-2000 05:00:00, then events will also be created every second between 01-01-2000 02:00:00 and 05-01-2000 01:00:00, creating far too many events.
I need to run more tests on smaller files to check that.
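
A quick way to check each source's own time range (and therefore how many filler seconds a global fill would insert between sources) is a sketch like this:

stats min(_time) AS first_event max(_time) AS last_event count BY source
| convert ctime(first_event) ctime(last_event)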

Regards


clorne
Communicator

Hello again,
After further testing:
timechart cont=true span=1s values(field1) AS fields2 BY source => with the BY source clause, fields2 remains empty. Is it really possible to use a BY clause with timechart?

Otherwise, I confirm that my issue is that additional events are created "between" the two sources. Sometimes I get an error from Splunk; sometimes splunkd stops and I find exception indications in the log files.

So, do you think there is a way to produce a continuous chart without adding events after the last event of one source and before the first event of the next, that is, to break the continuity between two sources?
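
One way to break the continuity might be to run the fill once per source with the map command, so that each timechart only spans that source's own time range. A sketch, where index=your_index is a placeholder for the real base search:

index=your_index
| stats count BY source
| map maxsearches=50 search="search index=your_index source=\"$source$\"
    | timechart cont=true span=1s values(field1) AS fields2
    | eval source=\"$source$\""

Note that map runs one search per source, so it can be slow with many files.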


woodcock
Esteemed Legend

Yes, of course events will be created: one for every second, for every source, regardless of whether any actual source events exist there; that's what timechart does!


jeffland
SplunkTrust

You might be looking for makecontinuous:

bucket _time span=1s | stats values(field1) as fields2 by _time | makecontinuous

See the makecontinuous documentation for details.
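
The field and span can also be given explicitly, so makecontinuous does not have to guess the bucket size; a sketch using the same 1-second span as above:

bucket _time span=1s
| stats values(field1) AS fields2 BY _time
| makecontinuous _time span=1s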

But I must admit I don't see why woodcock's suggestion (splitting by the field that contains info on your source file) doesn't work for you.

woodcock
Esteemed Legend

Because timechart was designed to produce nice line charts for visualization, it produces a _time (x-axis) value for every second. As you have discovered, all bucket does is round (replace) the _time value of each event to the span specified; it does not create events. You could check out eventgen, but it would probably be much easier to figure out why timechart is not working. Does this not work for you?

| timechart cont=true span=1s values(field1) AS fields2 BY source
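
With the BY clause, timechart turns each source value into its own column. If you need one row per source and second afterwards, a sketch like this converts it back (the EMPTY placeholder is arbitrary, and fillnull keeps the gap rows that untable would otherwise drop):

| timechart cont=true span=1s values(field1) AS fields2 BY source
| fillnull value="EMPTY"
| untable _time source fields2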

clorne
Communicator

Hello Woodcock, and thanks for your reply.
I am using Splunk to analyze multiple files that each have their own time reference. Let's say that each file begins at 0s and ends at 5 minutes.
When I use the timechart command on one single file, it is OK: there is an event per second.

With several files, I think that Splunk sorts all the events by time:
if there is no event at 2s in file1 but there is an event at 2s in file2, no new event will be created.
And my goal is to have additional events for each file separately.

Regards


woodcock
Esteemed Legend

That is exactly what my command does; did you try it?
