Reporting

saved search and collect

Path Finder

Did I miss something, or do I just completely not understand what collect does? Below are my saved search and conf file. The search returns results, it is saved as a report, it is a scheduled search, and it runs; however, it is not sending to the index. I am wondering if I simply do not understand what collect is meant for, or if I missed something. Thank you for your time and help!

index="18009" sourcetype="cisco:asa" host="10.0.0.1" src_ip!="10.0.0.*" Built
| lookup dshield cidr as src_ip OUTPUTNEW cidr as src_cidr 
| where src_cidr!="NONE"
| iplocation src_ip 
| lookup dnslookup clientip as src_ip OUTPUTNEW clienthost as src_host 
| lookup cidrv4toasn network as src_ip OUTPUTNEW autonomous_system_number AS ASN
| eval src_host=if(src_host!="",src_host,"No PTR")
| eval _raw="tifid=000001 host="+host+" source="+source+" sourcetype="+sourcetype+" city="+City+" region="+Region+" country="+Country+" src_host="+src_host+" src_asn="+ASN+" msg="+_raw
| addinfo
| collect index="threat_intel"

[dshield hits]
action.email.useNSSubject = 1
action.summary_index._name = threat-intel
action.summary_index.report = "DShield Hits"
alert.track = 0
cron_schedule = */15 * * * *
dispatch.earliest_time = -15m
dispatch.latest_time = now
display.visualizations.charting.chart = bar
display.visualizations.show = 0
enableSched = 1
request.ui_dispatch_app = search
request.ui_dispatch_view = search
schedule_window = auto
search = index="18009" sourcetype="cisco:asa" host="10.0.0.1" src_ip!="10.0.0.*" Built\
| lookup dshield cidr as src_ip OUTPUTNEW cidr as src_cidr \
| where src_cidr!="NONE"\
| iplocation src_ip \
| lookup dnslookup clientip as src_ip OUTPUTNEW clienthost as src_host \
| lookup cidrv4toasn network as src_ip OUTPUTNEW autonomous_system_number AS ASN\
| eval src_host=if(src_host!="",src_host,"No PTR")\
| eval _raw="tifid=000001 host="+host+" source="+source+" sourcetype="+sourcetype+" city="+City+" region="+Region+" country="+Country+" src_host="+src_host+" src_asn="+ASN+" msg="+_raw\
| addinfo\
| collect index="threat_intel"
1 Solution

Path Finder

niketnilay's command got me where I needed to go!

niketnilay · yesterday
For the collect command, index="" should be sufficient. However, there are some other considerations as well. You mentioned that the scheduled search is running, but have you validated the data? If your stats output does not have a _time field, you will have to create one. The following is just an example and your use case might differ (essentially, you just need to come up with 15-minute buckets so that you create one summary row every 15 minutes):

| addinfo
| eval _time = info_min_time
| bin span=15m

Also, you should run the query over a time span that has already completed, to avoid duplicates and to ensure that all required data is already indexed. This implies your dispatch window should lag behind your cron schedule, rather than line up exactly with it:

cron_schedule = */15 * * * *
dispatch.earliest_time = -30m
dispatch.latest_time = -15m
In order to test your collect command, you can run it in test mode directly in search (also change the index to a dummy test index):

| collect testmode=true index="summary_threat_intel_test"
Ideally you would have used summary indexing, but it seems you are trying to move data from one index to another with custom fields. Validate that all fields have the values you expect, and check the count of events being migrated every 15 minutes. Also, after your search executes, check whether an error was logged in Splunk's _internal index.
Refer to the documentation for the collect command (parameters and use cases): http://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Collect

View solution in original post


SplunkTrust

@asucrews... Good to hear that the suggestion worked for you 🙂

____________________________________________
| makeresults | eval message= "Happy Splunking!!!"

Influencer

If you're in a distributed environment, have you configured the search head running this to send logs to your indexers?

Path Finder

not a distributed environment


SplunkTrust

For the collect command, index="" should be sufficient. However, there are some other considerations as well. You mentioned that the scheduled search is running, but have you validated the data? If your stats output does not have a _time field, you will have to create one. The following is just an example and your use case might differ (essentially, you just need to come up with 15-minute buckets so that you create one summary row every 15 minutes):

   | addinfo
   | eval _time = info_min_time
   | bin span=15m

Also, you should run the query over a time span that has already completed, to avoid duplicates and to ensure that all required data is already indexed. This implies your dispatch window should lag behind your cron schedule, rather than line up exactly with it:

 cron_schedule = */15 * * * *
 dispatch.earliest_time = -30m
 dispatch.latest_time = -15m

In order to test your collect command, you can run it in test mode directly in search (also change the index to a dummy test index):

| collect testmode=true index="summary_threat_intel_test"

Ideally you would have used summary indexing, but it seems you are trying to move data from one index to another with custom fields. Validate that all fields have the values you expect, and check the count of events being migrated every 15 minutes. Also, after your search executes, check whether an error was logged in Splunk's _internal index.
Refer to the documentation for the collect command (parameters and use cases): http://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Collect
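To look for errors after a scheduled run, a search along these lines can help (the sourcetype and log_level filters here are assumptions; adjust them to your environment):

```
index=_internal sourcetype=splunkd log_level=ERROR collect
```

If collect wrote a stash file that could not be indexed, an error should surface here shortly after the scheduled run.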

____________________________________________
| makeresults | eval message= "Happy Splunking!!!"

Path Finder

I would love to use auto summaries; however, the fields I need to add are missing from 6.6.1, and I don't know the changes needed in the conf.


Champion

Does the index already exist?

Also, is there a reason you want to use collect instead of a scheduled search with summary indexing enabled?
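For reference, summary indexing on a scheduled report is enabled with savedsearches.conf settings along these lines (a sketch only; the stanza and index names here are assumptions and the summary index must already exist):

```
[dshield hits]
enableSched = 1
cron_schedule = */15 * * * *
dispatch.earliest_time = -30m
dispatch.latest_time = -15m
action.summary_index = 1
action.summary_index._name = threat_intel
```

With this approach Splunk handles writing the results into the summary index after each scheduled run, so no trailing collect command is needed in the search itself.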

Path Finder

I tried that at first, but it wasn't sending to my index.
