After upgrading to 6.5.0, why are there so many Timeliner issues in the search.log?

rroberts
Splunk Employee

It seems to be taking much longer for dispatched search requests to finalize. I see a lot of POST requests from Timeliner in search.log. Is Timeliner new in 6.5.0?

10-25-2016 23:01:47.210 INFO  Timeliner -  Sending POST request 'https://10.0.0.88:8089/services/search/remote/splunk02/1477436470.3/events?offset=231976&count=689'
10-25-2016 23:01:47.415 INFO  Timeliner - https://10.0.0.88:8089 No work units available, moving to idle
10-25-2016 23:01:47.452 INFO  Timeliner - Stats[10.0.0.88] 16 / 117.00 / 297.00 / 2879.00 / 558025.00 / 49.99 / 850457
10-25-2016 23:01:47.452 INFO  Timeliner - Stats[10.0.0.99] 16 / 58.00 / 198.00 / 2179.00 / 362025.00 / 63.87 / 384405
10-25-2016 23:01:47.452 INFO  Timeliner - Stats[Collections] 16 / 275.00 / 2872.00 / 24850.00 / 49518646.00 / 826.27 / 76817158
10-25-2016 23:01:47.459 INFO  Timeliner - Stats[10.0.0.88] 1 / 93.00 / 0.00 / 93.00 / 8649.00 / 0.00 / 554549
10-25-2016 23:01:47.459 INFO  Timeliner - Stats[Collections] 2 / 0.00 / 111.00 / 111.00 / 12321.00 / 55.50 / 291866418

sloshburch
Splunk Employee

To clarify, you see those are "INFO" events, right? Not "ERROR" or "WARN". Just want to make sure you don't go after a red herring.


arowsell_splunk
Splunk Employee

Hi,

Sorry my response is a little late, but I recently came across the same issue. As I understand it, if remote_timeline is enabled in limits.conf (the default), the timeline is built remotely on the search peers. This is not new behavior in 6.5.0.

I have seen that increasing "max_chunk_queue_size" in the limits.conf file can improve search performance. Its default was increased from 1MB to 10MB between versions 6.3.x and 6.5.x; you may want to increase it further.

The following parameters can also affect this behavior:

fetch_remote_search_log = disabledSavedSearches
remote_timeline = true
remote_timeline_fetchall = 1

Setting them as follows has been seen to improve search speed:

fetch_remote_search_log = false
remote_timeline = true
remote_timeline_fetchall = false
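
For reference, here is a minimal sketch of how those overrides might look, assuming they belong in the [search] stanza of $SPLUNK_HOME/etc/system/local/limits.conf (check the limits.conf spec for your version before applying):

```
[search]
# Don't pull the remote peers' search.log files back to the search head
fetch_remote_search_log = false
# Keep building the timeline remotely on the peers (the default)
remote_timeline = true
# Don't fetch all timeline events from the peers
remote_timeline_fetchall = false
# Chunk queue size in bytes (default rose from 1MB to 10MB between 6.3.x and 6.5.x)
max_chunk_queue_size = 10000000
```

After a restart, you can confirm the effective values with `splunk btool limits list search --debug`, which also shows which file each setting came from.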

However, there may be other reasons for the searches not finalizing; without reviewing the full log files it is difficult to say.

I was also wondering whether you managed to resolve the issue, and what steps you took?


jhedgpeth
Path Finder

Any comment on what "No work units available, moving to idle" actually means?
What functionality would I lose if I applied your changes?


a212830
Champion

Curious myself... seeing a lot of those messages.
