I should have updated this a while ago.
In the end I was able to do this with a single field extraction plus an eval statement in the search itself.
Another thread suggested this stanza in /opt/splunk/etc/system/local/props.conf:
[slow_query]
LINE_BREAKER = (?:;|Argument)(\n)(?:\# Time: |\# User@Host: )
SHOULD_LINEMERGE = false
TRUNCATE = 0
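Splunk treats the text matched by the first capture group in LINE_BREAKER (here, the newline) as the event boundary: the ";" or "Argument" stays with the previous event, and the "# Time: " / "# User@Host: " header starts the next one. A rough Python approximation of that splitting, simplified to the ";" case and run against a made-up two-event log snippet:

```python
import re

# Hypothetical slow-log snippet containing two events.
log = (
    "# Time: 2024-01-15T10:00:00\n"
    "# User@Host: appuser[appuser] @ web01 [10.0.0.5]\n"
    "SET timestamp=1705312800;\n"
    "SELECT * FROM orders WHERE id = 1;\n"
    "# Time: 2024-01-15T10:00:05\n"
    "# User@Host: appuser[appuser] @ web01 [10.0.0.5]\n"
    "SET timestamp=1705312805;\n"
    "SELECT * FROM orders WHERE id = 2;"
)

# Approximate the LINE_BREAKER: split on a newline that sits between a
# trailing ';' and the next '# Time: ' or '# User@Host: ' header.
events = re.split(r"(?<=;)\n(?=# Time: |# User@Host: )", log)
print(len(events))  # 2
```

This is only a sketch of the breaking logic; Splunk applies the real PCRE pattern at index time.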
I added one field extraction for my slow_query source type to get all of the relevant fields:
[^\[\n]*\[(?P<DB_User>[^\]]+)[^\[\n]*\[(?P<DB_Host>[^\]]+).*\n# Query_time: (?P<Query_time>\d*\.\d*) Lock_time: (?P<Lock_time>\d*\.\d*) Rows_sent: (?P<Rows_sent>\d*) Rows_examined: (?P<Rows_examined>\d*).*SET timestamp\=\d+\;(?P<SQL_STATEMENT>.*)\;
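You can sanity-check that extraction outside Splunk first. A minimal Python sketch against a made-up event (note the spacing of the Query_time line varies between MySQL versions, and Python's re needs re.DOTALL for the `.*` spans that cross line breaks):

```python
import re

# Hypothetical slow-log event, as it would look after line breaking.
event = (
    "# User@Host: appuser[appuser] @ web01 [10.0.0.5]\n"
    "# Query_time: 2.500000 Lock_time: 0.000100 "
    "Rows_sent: 10 Rows_examined: 50000\n"
    "SET timestamp=1705312800;\n"
    "SELECT * FROM orders WHERE customer_id = 42;"
)

# The field extraction from the answer, verbatim.
pattern = (
    r"[^\[\n]*\[(?P<DB_User>[^\]]+)[^\[\n]*\[(?P<DB_Host>[^\]]+)"
    r".*\n# Query_time: (?P<Query_time>\d*\.\d*)"
    r" Lock_time: (?P<Lock_time>\d*\.\d*)"
    r" Rows_sent: (?P<Rows_sent>\d*)"
    r" Rows_examined: (?P<Rows_examined>\d*)"
    r".*SET timestamp\=\d+\;(?P<SQL_STATEMENT>.*)\;"
)

m = re.search(pattern, event, re.DOTALL)
print(m.group("DB_User"), m.group("DB_Host"), m.group("Query_time"))
# appuser 10.0.0.5 2.500000
```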
In the search bar I used this:
sourcetype="slow_query" | eval Norm_sql=replace(SQL_STATEMENT, "(\d+|\'.*?\')", "XXXX")
This chart adds up all of the time the database spent on each normalized query in the slow log and ranks them. If your database server can handle the extra logging, you could set long_query_time down to 0 to capture every query, but that will definitely hurt performance.
sourcetype="slow_query" | eval Norm_sql=replace(SQL_STATEMENT, "(\d+|\'.*?\')", "XXXX") | stats sum(Query_time) by Norm_sql | sort sum(Query_time) desc
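The stats/sort portion is a plain sum-and-rank. A Python sketch of the same aggregation over hypothetical (Query_time, Norm_sql) pairs:

```python
from collections import defaultdict

# Hypothetical (Query_time, normalized SQL) pairs from the extraction above.
rows = [
    (2.5, "SELECT * FROM orders WHERE customer_id = XXXX"),
    (0.8, "UPDATE sessions SET last_seen = XXXX WHERE id = XXXX"),
    (4.1, "SELECT * FROM orders WHERE customer_id = XXXX"),
]

# Equivalent of: stats sum(Query_time) by Norm_sql
totals = defaultdict(float)
for query_time, norm_sql in rows:
    totals[norm_sql] += query_time

# Equivalent of: sort sum(Query_time) desc
ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
for norm_sql, total in ranked:
    print(f"{total:.1f}  {norm_sql}")
```

The slowest normalized query lands at the top, just as in the Splunk chart.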