We have Windows and Linux BIND DNS servers logging to a single index in Splunk. Because of the way Windows logs the queried domain name in DNS requests, we are doing a search-time extraction. If we want to search both types of DNS logs for any lookups of www.splunk.com, we do a search this way:
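For example, something along these lines, where query is the Linux BIND field and win_query is the search-time-extracted Windows field (the index and field names here are illustrative):

```
index=dns query="www.splunk.com" OR win_query="www.splunk.com"
```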
It works but is very inefficient because of the search-time extraction on win_query. What I would like to do is create a new index and populate it daily with the unique values from each of those fields, deduping between fields of course. I have been researching this and am not certain that a summary index is what I want. We basically want to search months' worth of DNS logs to see whether a domain shows up or not; at that point we don't need the actual log event, just the fact that it exists. Is it possible to take the unique values from two different fields and populate a new index with those values? Any other suggestions?
This is a lot of work, but it is worth it: you will have a solid Splunk installation that can easily fulfill future requirements.
A lot of the work has already been done.
I would only start using a summary index if the number of events you have to process in one search is too big (that is, if a single search takes too long to complete).
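For the use case described (just checking whether a domain ever appeared), a daily scheduled search could collapse both fields into one deduplicated domain list and collect it into a dedicated summary index. A rough sketch, assuming the field names query and win_query and a hypothetical summary index name:

```
index=dns earliest=-1d@d latest=@d
| eval domain=coalesce(query, win_query)
| stats count by domain
| fields domain
| collect index=dns_domain_summary
```

Checking months of history then becomes a cheap search against the small summary index, e.g. index=dns_domain_summary domain="www.splunk.com".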
Hope this gets you started.
Thanks Chris. I should have clarified a couple of things. We do have different sourcetypes for the Windows and Linux DNS logs. We wanted to stay away from search-time extractions because they are very expensive resource-wise: the same search takes 15 minutes longer with search-time extractions.
I just found a way to use SEDCMD to fix the odd Microsoft format for the domain being queried prior to indexing. Once that is done, we can search the index without needing any search-time extractions. Hopefully 🙂 Thanks again.
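For anyone finding this later, one common variant of that SEDCMD fix: Windows DNS debug logs write the queried name with length-prefixed labels, e.g. (3)www(6)splunk(3)com(0), and a SEDCMD in props.conf can rewrite that at index time. A sketch, assuming a sourcetype of win_dns (the stanza and class names are placeholders):

```
# props.conf on the indexer or heavy forwarder
[win_dns]
# Replace each (N) length prefix with a dot, turning
# (3)www(6)splunk(3)com(0) into .www.splunk.com.
# Note the leftover leading/trailing dots, which you may
# want to strip as well or simply account for when searching.
SEDCMD-fix_win_domain = s/\(\d+\)/./g
```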