We have a dashboard that lists a series of events representing alarms that users 'clear' as non-issues. It includes a 'Clear All'-style button that clears multiple events at once matching a given field value; the button is implemented in JavaScript and triggers a search similar to the one below:
| inputlookup cleared.csv
| append [| makeresults | eval Id="1000" | eval Reason="Low Level" | eval Timestamp=now()]
| append [| makeresults | eval Id="1234" | eval Reason="Low Level" | eval Timestamp=now()]
| append [| makeresults | eval Id="1301" | eval Reason="Low Level" | eval Timestamp=now()]
...
| append [| makeresults | eval Id="1567" | eval Reason="Low Level" | eval Timestamp=now()]
| table Reason,Id,Timestamp | sort Timestamp desc
| outputlookup cleared.csv
i.e. each cleared event's unique "Id" field is appended to a lookup file, which the original search then uses to hide those events.
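For context, the hiding side works roughly like this (a sketch only; "index=alarms" is a placeholder since I haven't shown our base search, but the lookup fields match the ones above):

index=alarms NOT [| inputlookup cleared.csv | fields Id]

The subsearch expands to an (Id="1000" OR Id="1234" OR ...) clause, so any alarm whose Id is in cleared.csv is excluded from the dashboard.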
We've been using this tool successfully for a couple of years. The alarm list is usually checked daily, and around 20-30 events are cleared simultaneously with a few clicks. However, since upgrading to 7.1, clearing a large number of alarms causes hanging behaviour, and the clearing can take tens of minutes to complete.
Further testing of the search above, with around 30 entries appended to the lookup, shows that it can take an extremely long time in Splunk 7.1, over 30 minutes, while the same search completes in about 2 seconds in 7.0. When it eventually finishes, the job inspector in 7.1 erroneously reports that the search took only seconds. It would be simple enough to run the search 30 times with a single 'append' each time, but that would be a massive change to the JavaScript we put together to run the search as it is now.
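One workaround I'm considering (a sketch only; the Id values are just the examples from above, and our JavaScript would have to emit a case() expression instead of repeated append clauses) is collapsing the N appends into a single subsearch:

| inputlookup cleared.csv
| append
    [| makeresults count=4
     | streamstats count AS row
     | eval Id=case(row==1,"1000", row==2,"1234", row==3,"1301", row==4,"1567")
     | eval Reason="Low Level", Timestamp=now()
     | fields Reason Id Timestamp]
| table Reason, Id, Timestamp
| sort Timestamp desc
| outputlookup cleared.csv

If reading and rewriting the whole file is part of the slowdown, running outputlookup with append=true on just the new rows might also be worth testing, though I haven't verified whether that avoids the hang.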
I've not seen anything in the release notes to suggest what might be causing this. Is anyone else having similar problems? Should this be reported as a bug?