Splunk Search

How can I identify field extractions that are causing performance problems?

Contributor

Is there a log configuration option that will make splunkd log when poorly written field extractions are impacting search performance? (Or is there some other way, such as a Splunk search, to identify extraction-related performance issues?)

0 Karma

Splunk Employee

It sounds like you already know that field extraction is the issue and which search is slow, and you would like to know which specific field extraction is causing the performance problem for that search.

The log most likely to help is search.log in your search artifact, so enabling DEBUG logging ($SPLUNK_HOME/etc/log-searchprocess.cfg) might help. In practice, though, reading through the debug log rarely helps unless you spot an ERROR or WARN message, so it is usually difficult to pin down the offending field extraction from a log file alone.
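If you do try the DEBUG route, the change looks roughly like the line below. Treat it as a sketch only: the category names in log-searchprocess.cfg differ between versions, and SearchOperator:kv is only an assumed name for the key-value (field) extraction component, so use whichever extraction-related categories actually appear in your copy of the file.

    # $SPLUNK_HOME/etc/log-searchprocess.cfg
    # Raise one component to DEBUG instead of the whole file so that
    # search.log stays readable; the category name below is an assumption
    # and must be checked against the categories shipped in this file.
    category.SearchOperator:kv=DEBUG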

I would try to break the search down into pieces and see where the performance issue appears.

If you're not sure that field extraction is the cause of the search performance issue, the troubleshooting becomes more complicated: indexers, buckets, event volume, memory, search command types, and so on all come into play.
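One quick way to confirm that extraction is the expensive piece is to run the same base search twice in Smart or Fast mode, once without referencing the suspect field and once forcing it to be extracted, then compare the two jobs in the Job Inspector; the difference shows up mainly under the command.search.kv execution cost. A sketch, where the index, sourcetype, and field name are placeholders:

    Baseline (the suspect field is not referenced, so it is not extracted):
    index=web sourcetype=access_combined earliest=-1h | stats count

    Same events, but the split-by forces extraction of the suspect field:
    index=web sourcetype=access_combined earliest=-1h | stats count by suspect_field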

0 Karma

Legend

You could use the Job Inspector to examine your search.
You can find it under the Job menu (Inspect Job).
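If you want to compare run times across many jobs without opening the Job Inspector on each one, a REST search along these lines can help (a sketch; splunk_server=local just limits it to the search head you run it on):

    | rest /services/search/jobs splunk_server=local
    | table sid, author, title, runDuration
    | sort - runDuration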
Anyway, I suggest verifying disk I/O, which should be at least 800 IOPS; slow storage is often the real cause of Splunk performance problems. I used bonnie++ for the measurement.
Bye.
Giuseppe

0 Karma

Contributor

Thank you. Yes, I use the Job Inspector often. I was more interested in identifying bad extractions in searches that others are running, in the hope of mitigating issues before they affect other users. For example, something in splunkd.log that I could use to alert us when a bad extraction is negatively impacting search performance.
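One approach along those lines is to alert on slow completed searches from the _audit index and then check those jobs for a high command.search.kv cost in the Job Inspector. A sketch, assuming a 60-second threshold that would need tuning for your environment:

    index=_audit action=search info=completed total_run_time>60
    | stats count, max(total_run_time) AS max_runtime BY user, search
    | sort - max_runtime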

0 Karma