Can you help me accelerate a dataset that has streaming commands?

xanthakita
Path Finder
09-07-2018
07:28 AM
I am trying to accelerate a dataset I created, and it tells me I can’t because it has streaming commands.
I’m not sure if there is a better way to accelerate this dataset so it’s faster for general searches.
Here is the query that builds the dataset:
index=netcool_noi_1 sourcetype=netcool:policylogger netcool_serial=*
| eval unassigned="FALSE"
| eval enriched="FALSE"
| eval correlated="FALSE"
| search reporting_results=*
| rex field=reporting_results "NODE:\s+(?<node>\S+)\s+"
| rex field=_raw "SERVER_SERIAL\:\s+(?<serverSerial>\d+)"
| rex field=_raw "REPORTING RESULTS: ENRICHED WITH PARENT CIRCUIT ID FROM PLUCK:\s+(?<parentCircuitId>\S+\s+\S+\s+\S+)\s+"
| rex field=_raw "REPORTING RESULTS: ENRICHED WITH CIRCUIT ID FROM RESOLVE MSS DATA FOR NODE:.*CIRCUIT ID:\s+(?<circuitId>.*)\s+RATE\s+"
| rex field=_raw "REPORTING RESULTS: (?<testfield>\S+)\s+"
| eval enriched=if(in("ENRICHED", testfield), "TRUE", enriched)
| eval unassigned=if(like(reporting_results,"%UNASSIGNED%"), "TRUE", "FALSE")
| eval correlated=if(in("CORRELATED", testfield), "TRUE", correlated)
| transaction netcool_serial maxevents=7 keeporphans=1 keepevicted=1 mvlist=(enriched, correlated, unassigned)
| eval unassigned=if(in("TRUE", unassigned), "TRUE", "FALSE")
| eval enriched=if(((in("TRUE", enriched) OR (len(parentCircuitId)>=0)) AND (unassigned="FALSE")), "TRUE", "FALSE")
| eval correlated=if(in("TRUE", correlated), "TRUE", "FALSE")
| eval parentfound=if(len(parentCircuitId)>=0, "TRUE", "FALSE")
Any suggestions?
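Editor's note: the transaction command is usually what stops a search like this from meeting acceleration requirements, since it is not a streaming command. For comparison, here is a minimal sketch of how the transaction stage might be replaced with a stats rollup per netcool_serial. The flag field names (enriched_flag, correlated_flag, unassigned_flag) are illustrative and not part of the original query, and whether this form can actually be accelerated depends on which kind of acceleration (report vs. data model) is in play, so treat it only as a starting point.

index=netcool_noi_1 sourcetype=netcool:policylogger netcool_serial=*
  (same rex and per-event eval stages as above)
| eval enriched_flag=if(enriched="TRUE", 1, 0)
| eval correlated_flag=if(correlated="TRUE", 1, 0)
| eval unassigned_flag=if(unassigned="TRUE", 1, 0)
| stats max(enriched_flag) AS enriched_flag max(correlated_flag) AS correlated_flag max(unassigned_flag) AS unassigned_flag values(parentCircuitId) AS parentCircuitId by netcool_serial
| eval unassigned=if(unassigned_flag=1, "TRUE", "FALSE")
| eval correlated=if(correlated_flag=1, "TRUE", "FALSE")
| eval enriched=if((enriched_flag=1 OR isnotnull(parentCircuitId)) AND unassigned="FALSE", "TRUE", "FALSE")
| eval parentfound=if(isnotnull(parentCircuitId), "TRUE", "FALSE")

Unlike transaction with maxevents=7, keeporphans, and keepevicted, a plain stats rollup considers every matching event for a serial, so results may differ if those limits matter to you.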

xanthakita
Path Finder
09-10-2018
07:54 AM
Thank you @mstjohn_splunk for putting my code into a code block. I intended to do that and got dragged away to another emergency. Now, if someone just has some insight into a better way to build this dataset so it can be accelerated, I would appreciate it.
