Splunk Search

Question on stash files or sourcetype=stash

RNB
Path Finder

I am running Splunk version 4.2.1.

I have a saved search that runs nightly. This was one of my first queries in Splunk, so there is likely room for improvement. A Perl script executes later in the morning to create a report from the resulting .csv file. I use the collect command to index the search data, but the stash file has been filling the disk lately; it has grown as large as 20 GB.

(%ASA-4-106023 OR %ASA-4-733100 OR %ASA-3-710003 OR %ASA-6-106015 OR %ASA-6-106006 OR %ASA-6-725006 OR "invalid user" OR "% Failed User Login" OR "%AAA-W-REJECT" OR "%EMWEB-1-LOGIN_FAILED:" OR "Authent. Failure:" OR %ASA-4-106023 OR %ASA-4-733100 OR %ASA-3-710003 OR %ASA-6-106015 OR %ASA-6-106006 OR "unable to connect" OR "%SNMP-W-SNMPAUTHFAIL" NOT (search OR 106023 OR 106015 OR "Topology Change" OR "%ASA-4-733100" OR "%ASA-3-710003" OR igmp ))| collect | dedup _time | sort -host _time | fields _raw | outputcsv singlefile=true loginfails.csv

It looks like the collect command is duplicating data. For example, 21 unique login-failure attempts have generated 4,273,831 events. We are quite certain that the switch in question has not sent 4.2M syslog events to Splunk.

To prove this, I created a simple query looking for login failures by a specific user on a specific day.

username AND REJECTED

I end up with 4,273,831 matching events, most of them at the exact same second. When I modify the query as follows, I get 21 matching events.

username AND REJECTED | dedup _time
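
Note that dedup _time keeps only one event per timestamp, so it would also collapse genuinely distinct failures that happen to share a second. A variant (just a sketch, using the same terms as above) that drops only exact duplicate copies instead:

```
username AND REJECTED | dedup _raw
```

If the 4,273,831 events really are repeated copies of the same 21 raw events, this should also return about 21 results.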

What both of these queries have in common is that sourcetype=stash and index=summary. While the query was executing, I did not see any stash files in $SPLUNK_HOME/var/spool/splunk, where I normally see them.

If I change the query to "index=main username AND REJECTED" or "sourcetype=udp:514 username AND REJECTED", I get only 1 matching event.

I cannot reconcile "username AND REJECTED | dedup _time" producing 21 results with "index=main username AND REJECTED" producing only 1. I am also wondering whether a bug in Splunk could cause the summary index to fill with 4,273,831 matching events.
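
One way to check whether the summary index holds repeated copies of the same raw events (a diagnostic sketch, using the index and terms from the examples above) is to count occurrences of each distinct raw event:

```
index=summary sourcetype=stash username REJECTED
| stats count by _raw
| sort - count
```

If each of the 21 unique events shows a count in the hundreds of thousands, the duplicates are being created at collection time rather than being sent by the switch.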

Any help unravelling this would be appreciated.


jrodman
Splunk Employee

It looks like you have a search that looks for uncommon events and then uses collect manually (not recommended) to put those events into the summary index.

This is probably duplicating events every time it runs.

One piece of missing information is the time range this search is bounded to. I suspect it is running over all time, re-collecting every previously matched event on each run, and thus more than doubling the data with every invocation.

I recommend disabling the search, or simply removing the collect from it, as I don't see what it is doing for you.
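
If the only goal is the nightly CSV, here is a minimal sketch of the search without collect, bounded to an explicit time window (assuming the nightly job should cover the previous 24 hours; adjust earliest/latest and the match terms to your environment):

```
earliest=-24h@h latest=@h index=main ( ...same match terms as the original search... )
| dedup _time
| sort - host _time
| fields _raw
| outputcsv singlefile=true loginfails.csv
```

With an explicit earliest/latest bound, each run only reads the new day's events instead of re-searching all time.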


As for your reconciliation question, I'm not sure exactly what is happening; you may have to work with Support to figure it out. However, we do know Splunk has trouble with extremely large numbers of results from one source within the same second. The backing store is sorted at one-second granularity, but the UI output is expected to sort by subsecond as well, so if we have to sort 4 million events by a non-indexed field, performance goes to heck. This is a very rare case in real data, so it didn't get to the top of the prioritization pile, although work on it is starting now.
