Given a specified number n of CSVs, I want to input and append them:
| makeresults
| eval count=mvrange(0,n,1)
| mvexpand count
| eval filename=strftime(relative_time(now(), "-"+tostring(count)+"mon"), "directory\\%Y-%m.csv")
| map maxsearches=n search="| inputcsv $filename$"
But this does not work: it caps each CSV at 10,000 lines, whereas outside of map, inputcsv loads the full file. Any suggestions? My files are under 3 MB each, so it is kind of crazy that this does not work in Splunk.
I also tried paging inputcsv inside the map:
search="| inputcsv start=0 file.csv | append [| inputcsv start=10000 file.csv]"
This does not fix the problem either; it still only brings in the first 10k lines per file.
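The cap seems to be on map's own output rather than on inputcsv itself, which would explain why paging inside the search string doesn't help. Here is a minimal repro sketch that takes the CSV out of the picture (20000 is just an arbitrary count above the default limit):

| makeresults | map maxsearches=1 search="| makeresults count=20000" | stats count

On a default install this should return count=10000 if the subsearch result limit is what is truncating the map output.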
Solved. Change maxout under [subsearch] in $SPLUNK_HOME/etc/system/local/limits.conf; map runs its search as a subsearch, so its results are capped by maxout (default 10,000). Not sure what the long-term negative ramifications of this are.
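For anyone applying the fix, a minimal sketch of the change, assuming a cap of 50000 is enough to cover your largest CSV (the value is illustrative, not a recommendation; larger caps mean more memory per subsearch):

# $SPLUNK_HOME/etc/system/local/limits.conf
[subsearch]
# maxout caps how many results a subsearch may return; the default is 10000
maxout = 50000

A splunkd restart is typically needed for limits.conf edits to take effect.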
I also ran into this problem, where the map subsearch was capping results at 10,000 behind the scenes:
| makeresults | map maxsearches=999999 search="search index=\"accesscontrol\" earliest=1555800000 latest=1556400000" | ....
and adjusting maxout fixed it.
(Lol I had the same problem again, Googled it, found this SA page, and was like tyvm whoever this helpful poster is!! Then realized it was me...)
Problem stated in another way:
| makeresults | map maxsearches=1 search="| inputcsv \"file.csv\"" | search <ITEM AT LINE 9999>
brings back results, but
| makeresults | map maxsearches=1 search="| inputcsv \"file.csv\"" | search <ITEM AT LINE 10001>
does not.
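A quicker way to check whether the cap is lifted than searching for a specific row (file.csv here stands in for any file with more than 10000 lines):

| makeresults | map maxsearches=1 search="| inputcsv \"file.csv\"" | stats count

Before the limits.conf change this tops out at count=10000; after it, the count should match the actual number of rows in the file.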