Reporting

Why are only some events collected in an index when a report runs with the collect command?

syx093
Communicator

I am new to Splunk, so please forgive me if my terminology is off. I have a report that runs every minute. In the search of that report, I end it with the following command.

|collect index=cactus_summary

This should collect the results of that search and store them as events, which I should then be able to search. However, when I search for index=cactus_summary I see only some of those events. For example, if I schedule the report to run every minute and wait 15 minutes, I should see 15 events when I search for index=cactus_summary. Instead I see 5-7 events, occurring at irregular intervals (e.g. event 1 occurs at 3:30, event 2 at 3:33, event 3 at 3:43).
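One thing worth checking first (just a suggestion, not a confirmed diagnosis): collect timestamps summary events from the results' _time field when one is present, and rows produced by stats carry no _time, so the summary events may land at timestamps that don't match the minute each run fired. Counting the summary index over All Time, for example:

index=cactus_summary earliest=0 | stats count

will tell you whether events are genuinely missing or merely falling outside the time window you searched.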


stephanefotso
Motivator

Hello! I don't know what your search query looks like, but note that scheduling the report to run every minute does not mean you will get one event per minute, for two reasons:

  1. Depending on the search criteria, some runs may collect no events into your new index at all. For example, a search like index=_internal "error" | collect index=error_index will send events into error_index only when the word "error" is found in the _internal index during that run.
  2. The search takes some time to run before producing events, depending on the complexity of your query and the performance of your system. A search can take more than one minute to complete before it returns its first results.
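If you want to verify that the scheduler really fired the report every minute, Splunk's internal scheduler log records every run. This is just a sketch, and "your report name" is a placeholder for the report's actual saved-search name:

index=_internal sourcetype=scheduler savedsearch_name="your report name"
| stats count by status

A status of skipped would mean the scheduler dropped some runs, for example because a previous run of the same report was still in progress.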

Thanks

SGF

syx093
Communicator

I looked at my query and I don't think it's excluding events for failing to match certain criteria, so it must be the performance of the computer. I ran the query and it takes at least 10 seconds, but I don't think a 10-second search would cause some events not to show up. Thanks anyway. I think I will create a dashboard with just the original search and not try to be fancy.


stephanefotso
Motivator

Hello! Can I see your full search query?

SGF

syx093
Communicator

host=schroeder index=os sourcetype=ps COMMAND=udpub | dedup PID | stats count AS udpub_cnt by host

| appendcols [ search host=schroeder earliest=-2m index=os sourcetype=ps COMMAND=aimglog | dedup PID | stats count AS aimglog_cnt by host]
| appendcols [ search host=schroeder earliest=-2m index=os sourcetype=ps COMMAND=bimglog | dedup PID | stats count AS bimglog_cnt by host]
| appendcols [ search host=schroeder earliest=-2m index=os sourcetype=ps COMMAND=smm | dedup PID | stats count AS smm_cnt by host]
| appendcols [ search host=schroeder earliest=-2m index=os sourcetype=ps COMMAND=sm | dedup PID | stats count AS sm_cnt by host]
| appendcols [ search host=schroeder earliest=-2m index=os sourcetype=ps COMMAND=sbcs | dedup PID | stats count AS sbcs_cnt by host]
| appendcols [ search host=schroeder earliest=-2m index=os sourcetype=ps COMMAND=syncd "_" | dedup PID | stats count AS syncd_cnt by host]
| appendcols [ search host=schroeder earliest=-2m index=os sourcetype=ps COMMAND=cleanupd | dedup PID | stats count AS cleanupd_cnt by host]

| eval udpub_status=case(udpub_cnt==8,"OK",udpub_cnt!=8,"NOT OK")
| eval smm_status=case(smm_cnt==1,"OK",smm_cnt!=1,"NOT OK")
| eval sm_status=case(sm_cnt==1,"OK",sm_cnt!=1,"NOT OK")

| eval sbcs_status=case(sbcs_cnt==1,"OK",sbcs_cnt!=1,"NOT OK")
| eval syncd_status=case(syncd_cnt==2,"OK",syncd_cnt!=2,"NOT OK")
| eval cleanupd_status=case(cleanupd_cnt==1,"OK",cleanupd_cnt!=1,"NOT OK")
| eval aimglog_status=case(aimglog_cnt==4,"OK",aimglog_cnt!=4,"NOT OK")
| eval bimglog_status=case(bimglog_cnt==4,"OK",bimglog_cnt!=4,"NOT OK")
| fields host *status *cnt | collect index=cactus_summary
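A side note on the query itself: the base search for udpub has no earliest=-2m while every subsearch does, so they cover different time ranges; and if the base search finds no udpub events, stats returns zero rows, collect writes nothing, and that run leaves a gap in the summary index, which would produce exactly the behavior you describe. A simpler single-search sketch (assuming all of these processes are in the same index and sourcetype, and noting that chart names its columns after the COMMAND values; your original syncd clause also filters on "_", which this sketch omits) would be:

index=os sourcetype=ps host=schroeder earliest=-2m (COMMAND=udpub OR COMMAND=aimglog OR COMMAND=bimglog OR COMMAND=smm OR COMMAND=sm OR COMMAND=sbcs OR COMMAND=syncd OR COMMAND=cleanupd)
| dedup PID
| chart count over host by COMMAND
| eval udpub_status=if(udpub==8,"OK","NOT OK")
| eval smm_status=if(smm==1,"OK","NOT OK")
| collect index=cactus_summary

with the remaining status evals following the same pattern. A single search with one result row per host always produces output, so every scheduled run writes something to the summary index.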
