Wrong sorting

iKate
Builder

Hi everyone,

We've run into a problem with sorting data in a table. We sort user IDs; say there are 350000 of them, in the range 1 - 350000. The results are shown in a table wrapped in a paginator.

The problem: when sorting by user_id with the default controls of the Splunk web interface (the triangles next to the field name), the last value in descending order is 300000, and then come 300001, 300002, etc., instead of 350000, 349999, etc.

To check it yourself, create a CSV file (test.csv) with one column of numbers 1-350000 (e.g. with a header "P"), add the file as a lookup, and run the command:

| inputlookup test.csv | table P | sort 0 num(P) P

or just

| inputlookup test.csv | table P  

and click the sort triangles.
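
If it helps, here is one quick way to generate such a file, assuming a Unix-like shell (a sketch; the filename matches the example above):

# write the header, then the numbers 1-350000, one per line
echo "P" > test.csv
seq 1 350000 >> test.csv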

P.S. I didn't manage to find any relevant settings in limits.conf that could affect this.

1 Solution


iKate
Builder

As @bmacias84 suggested, changing this limit helped:

[searchresults]
maxresultsrows = 400000

By default we had 100000 and raised it to 400000. Whether it's a coincidence or not, we have been seeing degraded system performance since, though.
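
For anyone else trying this, a minimal sketch of where the override typically goes (the path below is the standard local-config location, not quoted from this thread); limits.conf changes generally require a Splunk restart to take effect:

# $SPLUNK_HOME/etc/system/local/limits.conf
[searchresults]
# raised from our default of 100000 so that all 350000 rows sort correctly
maxresultsrows = 400000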


bmacias84
Champion

@iKate, if your results were truncated, I was thinking of the following settings.


[searchresults]
# cap on the number of result rows a search returns
maxresultsrows =
[lookup]
# maximum size (in bytes) of a lookup file that is kept in memory
max_memtable_bytes =

I generated an 8 MB test file and still saw no problem. Even though the results are sorted incorrectly, do you still receive all your results, or are they truncated?
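
A quick way to check for truncation with the same test lookup (a sketch; if the count comes back below 350000, rows are being dropped):

| inputlookup test.csv | stats count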


bmacias84
Champion

@iKate, that is not a coincidence; going to 8x the default could well have performance impacts. I would try to find a happy medium.

  • This limit should not exceed 50000. Setting this limit higher than 50000 causes instability.

If you are not currently memory constrained, try increasing and playing with max_mem_usage_mb under the global settings.
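
A sketch of what that could look like (the value below is illustrative, not from this thread; global settings sit at the top of limits.conf, outside any stanza, or under [default]):

# $SPLUNK_HOME/etc/system/local/limits.conf
[default]
# illustrative value; raise only if the host has memory to spare
max_mem_usage_mb = 500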


iKate
Builder

@bmacias84, thanks. I've checked the default limits and my user's limits with btool and found no differences. Which limits could conflict in my test case? If it helps, I can send these limits lists; I just need an email.
I also ran the same inputlookup commands on the CSV file as the admin user - same wrong sorting.

We're using Splunk 4.3.4, build 136012. The test CSV file is 3 MB in size.
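
For reference, lookup files uploaded through the UI typically land under $SPLUNK_HOME/etc/apps/<app>/lookups; checking the size from the shell (a sketch; <app> is a placeholder for the app name):

ls -lh $SPLUNK_HOME/etc/apps/<app>/lookups/test.csv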


bmacias84
Champion

@iKate, I am unable to replicate your problem. I created a CSV file with 425064 rows. What version of Splunk are you using? How large in MB is your lookup? Are your results truncated? You might have conflicting limit settings.

To find the system defaults for limits and compare them against your user and app settings:

./splunk btool --debug limits list                 # effective system-wide limits, with the source file of each setting
./splunk btool --debug --user=admin limits list    # limits as resolved in the admin user's context
./splunk btool --debug --app=nix limits list       # limits as resolved in the nix app's context
