My environment: standalone Splunk ver. 7.1.4
(*I found the same phenomenon in ver. 7.1.3.)
I executed the search below using two lookup tables
(*I attached them to this page.):
| inputlookup test_lookup_2.csv
| lookup test_lookup_1.csv key OUTPUT service as a
| mvexpand a
| eval a=if(a=service, null(), a)
| eval _time=now()
| transaction row_num
| stats count by a
When I run it in fast mode or smart mode, Splunk returns "No results found."
But when I run it in verbose mode, Splunk returns results normally!
Also, if I append | noop search_optimization = false as the last line of the search, results are returned normally regardless of the search mode!
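For reference, this is what the search looks like with that workaround appended:
| inputlookup test_lookup_2.csv
| lookup test_lookup_1.csv key OUTPUT service as a
| mvexpand a
| eval a=if(a=service, null(), a)
| eval _time=now()
| transaction row_num
| stats count by a
| noop search_optimization=false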
What causes this difference?
This behavior is too strange to be intended, so I suspect it is a bug.
If anyone knows about it, please tell me.
Hi, this looks like an edge case of projection elimination by the SPL optimizer with the transaction command and _time.
Please open a support case and reference SPL-143274, which looks similar.
You can also work around it by adding | fields * after the transaction command, or by adding the following in limits.conf:
[search_optimization::projection_elimination]
enabled = false
You can reproduce it without the lookups with:
| makeresults | fields - _time | eval a=mvrange(1,10,1), row_num=1, test=2, _time=now() | transaction row_num | stats count by a
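If it helps, this is a sketch of the fields * workaround applied to that reproduction (assuming projection elimination really is the culprit, it should then return results in any search mode):
| makeresults
| fields - _time
| eval a=mvrange(1,10,1), row_num=1, test=2, _time=now()
| transaction row_num
| fields *
| stats count by a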
Thank you for the answer!
I will open a case with the support team.
This is the reason to avoid using the transaction command entirely. It precludes any map-reduce optimization (no work is done on the indexer tier) and pulls ALL events back to the Search Head, where it has to do a tremendous amount of work. When using transaction even at moderate scale, it will use up all available RAM on the Search Head, at which point the Search Head will abort the search in the middle, finalize partial results (or sometimes crash), and present them to you WITHOUT ANY OBVIOUS WARNING that an OOM abort happened. You can see this in the job inspector and in the _* logs. When it comes to transaction, just don't. There is almost always another scalable way to do it with a stats command, as sketched below.
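As a rough sketch of that pattern (the index, sourcetype, session_id, and status fields are placeholders here, not taken from the search above), a typical stats-based replacement for transaction looks something like:
index=main sourcetype=my_sourcetype
| stats min(_time) as start_time, max(_time) as end_time, values(status) as status, count as event_count by session_id
| eval duration=end_time-start_time
This keeps the grouping work on the indexers and avoids pulling every raw event back to the Search Head.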
So each level of field generation and presentation mode causes more RAM to be used before you even get to the transaction command, which means that the transaction command will not get as far before it goes OOM. But even in fast mode, I am reasonably sure that you are still going OOM; you just get a little further down the road before you do. Check the job inspector to be sure.
Thank you for the comment.
But my environment isn't a distributed configuration.