Splunk Search

Is there a way to modify the request frequency of an indexed real-time search?

eroffol
Path Finder

I have indexed real-time searching set up for my app by setting indexed_realtime_use_by_default = true in limits.conf.
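For reference, that setting lives in the [realtime] stanza of limits.conf, so the relevant fragment looks like this (a minimal sketch of the configuration described above):

[realtime]

# Run real-time searches as indexed real-time searches by default
indexed_realtime_use_by_default = true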

My dashboard has 1 indexed real-time search and it looks like it is generating 1 HTTP GET request per second. Is there a way to modify the request frequency of an indexed real-time search? For example, I would like to have my indexed real-time search only generate 1 HTTP GET request every 10 seconds instead of 1 per second.

I have tried modifying some variables in the limits.conf file, but could not find the right property to achieve this.

1 Solution

jtacy
Builder

Tested this web.conf config on a 6.5.3 search head:

[settings]

job_max_polling_interval = <integer>
* Maximum polling interval for a job, in milliseconds (ms)
* The default value is 1000
* This is the maximum time to wait when fetching results
* The recommended maximum value is 3000

Docs: http://docs.splunk.com/Documentation/Splunk/6.5.3/Admin/Webconf

Setting that to 10000 had the effect you're looking for but it also seemed to make even ordinary searches hang at "Finalizing job" longer than normal. Perhaps that's one reason why Splunk recommends a value of no more than 3000.
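So in practice, a web.conf on the search head might look like the following sketch (3000 is used here because it is Splunk's recommended maximum, per the spec above; 10000 worked but slowed job finalization):

[settings]

# Poll running search jobs every 3 seconds instead of the 1-second default.
# Values above 3000 ms are not recommended and can make searches hang
# longer at "Finalizing job".
job_max_polling_interval = 3000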

That said, I don't see your load balancer being a problem unless it's woefully underpowered (or undernetworked) and already overloaded. If the load balancer is running at layer 4, just shuffling connections, that takes almost no work. If the load balancer is running at layer 7, perhaps to run cookie persistence on a search head cluster, it's doing a little more work but it's still not a big deal. Consider this (admittedly vendor-performed) testing for that specific use case (page 12):
http://www.f5.com/pdf/reports/F5-comparative-performance-report-ADC-2013.pdf
Even the mid-range equipment under test handles hundreds of thousands of 5 KB requests per second.

As long as your Splunk environment can handle the load I suspect everything will be just fine. Have fun!!




eroffol
Path Finder

This is exactly what I was looking for and confirmed it works this morning.

I too think our load balancer will be able to handle the additional traffic from Splunk without issue. I think some of our leads got a little jumpy when they saw the apache logs filling up much more quickly than normal. I don't think it is worth sacrificing performance for log readability. Maybe I will suggest they look into the "grep" command =P

Thank you very much!


DalJeanis
SplunkTrust
SplunkTrust

https://docs.splunk.com/Documentation/Splunk/6.6.3/Search/Aboutrealtimesearches

Looks like this may be the one you want...

[realtime]

indexed_realtime_default_span = <int>
* An indexed realtime search is made up of many component historical searches
  that by default will span this many seconds. If a component search is not
  completed in this many seconds, the next historical search will span the
  extra seconds. To reduce the overhead of running an indexed realtime search,
  you can change this span to delay longer before starting the next component
  historical search.
* Precedence: Indexers
* Defaults to 1
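If that is the right knob, the limits.conf change would be a sketch like this (applied on the indexers, per the precedence note in the spec above, with 10 chosen to match the once-every-10-seconds goal):

[realtime]

# Span each component historical search over 10 seconds instead of 1,
# reducing how often the indexed real-time search starts a new search.
indexed_realtime_default_span = 10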

eroffol
Path Finder

I thought that also, but I updated it to 30 (in both the app's limits.conf and .../etc/system/local/limits.conf), restarted Splunk, cleared my browser cache (for good measure), and still see 1 HTTP GET request per second.

Here is an example of what the requests look like:

http://<machine name>:8443/splunk/en-US/splunkd/__raw/servicesNS/-/-/search/jobs?id=rt_admin__admin_c21kX3NwbHVuaw__search1_rt_1505845497.11&count=1&output_mode=json&_=150585475554

and then one second later:

http://<machine name>:8443/splunk/en-US/splunkd/__raw/servicesNS/-/-/search/jobs?id=rt_admin__admin_c21kX3NwbHVuaw__search1_rt_1505845497.11&count=1&output_mode=json&_=150585475555

and so on...

Is that just how Splunk works, and a real-time search's request frequency can't be changed?

The reason I brought this question up is that our company's software leads are concerned about the additional traffic that Splunk is creating on our load balancer. Most users will have 3 different dashboards open with about 10 real-time searches on each, so the average user will generate about 30 HTTP GET requests per second. The searches on these dashboards must update at least once every 10 seconds without any user interaction or panel refreshing. While a small number of users is fine, there is uncertainty about how our load balancer will handle a large number of users.
