I've been troubleshooting an issue for some time now that is proving pretty difficult to resolve. My goal is to change the contents of a particular lookup file that a lot of different searches and field extractions depend on. The columns themselves are not changing at all: same number, same names, same format. The only difference in the newer version of the lookup file is that it has more rows, and not that many more at that; we're talking roughly 4,200 vs. 4,800.
The problem I'm running into is that when I deploy the new lookup file (it's just a CSV referenced by a lookup definition), all of our indexers spike to 100% CPU utilization and stay there until I revert to the old version of the file, at which point they drop back down to hardly any utilization at all. I've tried both overwriting the current file and creating a new file and pointing the lookup definition at it, and both produce the same result. I've gone over the differences in data between the two versions of the lookup file and really can't find anything meaningfully different between them.
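For anyone doing a similar comparison, here is a minimal sketch (plain Python, not Splunk-specific) of the kind of sanity check I ran between the two versions: confirm the headers are identical, count the rows, and list the rows that exist only in the new file. The function name and file paths are hypothetical.

```python
import csv

def compare_lookups(old_path, new_path):
    """Compare two versions of a CSV lookup: headers, row counts, added rows."""
    with open(old_path, newline="") as f:
        old_rows = list(csv.reader(f))
    with open(new_path, newline="") as f:
        new_rows = list(csv.reader(f))

    return {
        # A header mismatch would break field extractions that use the lookup
        "headers_match": old_rows[0] == new_rows[0],
        # Row counts exclude the header line
        "old_row_count": len(old_rows) - 1,
        "new_row_count": len(new_rows) - 1,
        # Rows present only in the new file (order-insensitive)
        "added_rows": [r for r in new_rows[1:] if r not in old_rows[1:]],
    }
```

In my case this confirmed the new file was structurally identical to the old one, just with more rows, which is what pointed me away from the file itself and toward whatever was consuming it.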
Has anyone ever run into something like this? I'm wondering what else I can look into to resolve it, because at the moment I'm pretty stumped. We are running Splunk version 6.4.0.
After hammering away at it, I was able to resolve this. It turns out it wasn't such a straightforward problem. We have a bunch of monitoring searches running that were horribly optimized, and the new data in the lookup table caused those poorly optimized monitoring searches to start pulling back many more results. I had to go through each user's queries and optimize them to resolve the issue.