
How to map inputcsv on files over 10000 lines?


Given a specified number n of CSV files, I want to input and append them all:

| makeresults | eval count=mvrange(0,n,1) | mvexpand count | eval filename=strftime(relative_time(now(),"-"+tostring(count)+"mon"),"directory\\%Y-%m.csv")
| map maxsearches=n search="| inputcsv $filename$"

But this does not work. Inside map, each inputcsv is capped at 10,000 lines, whereas outside of map, inputcsv loads the full CSV. Any suggestions? My file sizes are under 3 MB, so it is kind of crazy this does not work in Splunk.

I also tried splitting the read inside the mapped search:

search="| inputcsv start=0 file.csv | append [| inputcsv start=10000 file.csv]"

That does not fix the problem either. It still only brings in the first 10,000 lines for each file, presumably because the cap applies to the results each mapped subsearch returns, not to inputcsv itself.

1 Solution


Solved. Change maxout under [subsearch] in etc/system/local/limits.conf. Not sure what the long-term negative ramifications of this are.
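For reference, here is a minimal sketch of that change. The 50000 is an arbitrary example value, not a recommendation; size it to your largest CSV. Note that limits.conf changes generally require a splunkd restart to take effect.

# $SPLUNK_HOME/etc/system/local/limits.conf
[subsearch]
# maxout defaults to 10000 results per subsearch; raise it so each
# mapped inputcsv can return the whole file
maxout = 50000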




I also ran into this problem, where map's subsearch was capping results at 10,000 behind the scenes:

| makeresults | map maxsearches=999999 search="search index=\"accesscontrol\" earliest=1555800000 latest=1556400000" | ....

and adjusting maxout fixed it.
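If you want to confirm the effective value before and after the change, one option (assuming your role can call the REST configs endpoint, and that this stanza path is right for your version) is:

| rest /services/configs/conf-limits/subsearch splunk_server=local | fields maxout maxtime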

(Lol I had the same problem again, Googled it, found this SA page, and was like tyvm whoever this helpful poster is!! Then realized it was me... )



Problem stated in another way:

| makeresults | map maxsearches=1 search="| inputcsv \"file.csv\"" | search <ITEM AT LINE 9999>

brings back results, but

| makeresults | map maxsearches=1 search="| inputcsv \"file.csv\"" | search <ITEM AT LINE 10001>

does not.
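A quick way to see the cap directly (a sketch, assuming file.csv has more than 10,000 rows) is to count what map actually hands back:

| makeresults | map maxsearches=1 search="| inputcsv \"file.csv\"" | stats count

With the default [subsearch] maxout this returns exactly 10000 no matter how long the file is; after raising maxout it should match the true row count.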
