I have a report with the query below, but I am only getting partial results. It was run once, from the start of the year to the current date, and this index would probably end up with around ~100 million logs. Did the search get stopped because of the sheer volume?
(host=pnr-proxy-prod* OR host=master*.menlosecurity.com* OR host=pnr-webui-prod*)
source=**
(level=* OR "error:" OR "warn:" OR "[warn]" OR "WARNING" )
| bucket _time span=1d
| eval no_event=if(isnull(event) AND (level="ERROR" OR level="WARNING"), _raw, null())
| rex field=host "^(master-|safemail-)?(.*-prod-)?(?<ms_version>[0-9-]+[0-9])"
| table _time, level, event, source, ms_version, no_event
| collect index=summary source=summary_all_events
That could be the case. If this is going to be a regularly scheduled summary-indexing search (e.g. daily or hourly), why not use summary index backfill to summarize the historical data? Also, are you missing some kind of aggregation command here, or do you intend to put all events into the summary index?
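If you do schedule it, the summary-index action can be enabled directly in savedsearches.conf instead of keeping the trailing `| collect` in the search. A minimal sketch, assuming a daily schedule and a saved search named `summary_all_events_search` (both are placeholders, and the `search` key, which would hold the query above, is left out for brevity):

```
[summary_all_events_search]
# Run shortly after midnight, summarizing the previous day
cron_schedule = 5 0 * * *
enableSched = 1
dispatch.earliest_time = -1d@d
dispatch.latest_time = @d
# Write each run's results into the summary index
action.summary_index = 1
action.summary_index._name = summary
```

With `action.summary_index = 1` doing the writing, the `| collect` at the end of the search would be redundant and can be dropped from the scheduled version.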
I'm not familiar with the Splunk CLI and might not have access to it. Yeah, I just need to put all events into an index, because one of my dashboards stopped running; it was having a hard time processing the large volume of data over the past few days.
Since you don't have access to the CLI, you should run the above search but divide the total time range into smaller buckets; e.g., instead of running the query year-to-date, run it for one week at a time.
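Concretely, each weekly run would be the same search with an explicit one-week window added. The time modifiers below are just an example window, and the rex capture name `ms_version` is assumed from the `table` command:

```
(host=pnr-proxy-prod* OR host=master*.menlosecurity.com* OR host=pnr-webui-prod*)
source=** earliest=-2w@w latest=-1w@w
(level=* OR "error:" OR "warn:" OR "[warn]" OR "WARNING")
| bucket _time span=1d
| eval no_event=if(isnull(event) AND (level="ERROR" OR level="WARNING"), _raw, null())
| rex field=host "^(master-|safemail-)?(.*-prod-)?(?<ms_version>[0-9-]+[0-9])"
| table _time, level, event, source, ms_version, no_event
| collect index=summary source=summary_all_events
```

Walking the window back one week per run (`-3w@w`/`-2w@w`, and so on) covers the whole year without any single search having to scan all ~100 million events.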
So you're basically just collecting all WARNING/ERROR logs and putting them in one index/source for faster searching? And are you searching for the same thing across all indexes/sources/sourcetypes?
Yup, basically just doing it for faster searches. And yeah, I'm searching across all sources because I might as well create a summary index that can also help with current and future dashboards.
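The dashboard panels would then read from the summary instead of the raw indexes; a minimal sketch (the timechart breakdown is only an illustration, not from the original dashboards):

```
index=summary source=summary_all_events (level=ERROR OR level=WARNING)
| timechart span=1d count by level
```

Because the summary only holds the tabled fields for matching events, this scans a tiny fraction of the data the original all-sources search would.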