Dashboards & Visualizations

Forwarder slow - dashboard fix

deepak02
Path Finder

Hi,

We have a Splunk Enterprise setup with 12 forwarders. Some of these forwarders are slow in sending data to the indexers (i.e. some of the logs arrive around 30 minutes late).

Therefore, when we calculate stats for dashboards every 20 minutes, the numbers are not correct.

We do not have the option of upgrading the forwarders.

Is there any way we can change the dashboard so that it displays stats for the latest time window in which logs from all the forwarders were available?

Example scenario:

Forwarders 1-12 send data at 09:20
Forwarders 1-8 send data at 09:30
Dashboard query runs at 09:45 (time interval is 'last 20 mins', i.e. 09:25 to 09:45)
Forwarders 9-12 send data at 09:55

When the dashboard runs at 09:45, I want it to use the time interval 09:00 to 09:20, since this is the latest time at which logs from all the forwarders were available.

Thanks,
Deepak


woodcock
Esteemed Legend

Use the metadata command to find the most recent timestamp across all of your hosts, set latest to that, set earliest back from it, and parameterize the outer search with a subsearch, like this:

index=_* OR index=* [|metadata type=hosts | stats max(recentTime) AS latest | eval search="earliest=" . relative_time(latest, "-1h") . " latest=" . latest | table search]
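Running the subsearch on its own shows what it hands to the outer search: a single search field containing a time-range clause, which Splunk splices into the outer search verbatim. The outer search effectively becomes something like this (the epoch values below are purely illustrative):

```
index=_* OR index=* earliest=1521550800.000000 latest=1521554400.000000
```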



deepak02
Path Finder

Thank you. How do I modify the query below?

application="UserLogin" (index="Production-UserLogin") env="Production" type=* sourcetype=userLogin:Performance (splunk_server=splunk02)
| eval count=1
| timechart per_minute(count) as trans_per_min
| stats max(trans_per_min) as "Transactions-per-minute"


woodcock
Esteemed Legend

Do it EXACTLY like this:

index="Production-UserLogin"  [|metadata type=hosts | stats max(recentTime) AS latest | eval search="earliest=" . relative_time(latest, "-1h") . " latest=" . latest | table search] application="UserLogin" env="Production" type=* sourcetype=userLogin:Performance (splunk_server=splunk02)
| eval count=1 
| timechart per_minute(count) as trans_per_min
| stats max(trans_per_min) as "Transactions-per-minute"

deepak02
Path Finder

Thank you, woodcock.

I do not want the 1-hour window to be hardcoded.

What I need is this:
The dashboard query should use the data from a time when the indexer received data from all the forwarders. This could be an hour back, 2 hours back or a day back.

Currently, 1 or 2 of my forwarders are sending logs a little late. Therefore the numbers that come up on the dashboard are incomplete.

I want the dashboard to display only accurate numbers (i.e. calculated from complete data), even if this means the data is an hour or 2 or a day old.


woodcock
Esteemed Legend

You are missing how my search functions. It finds the time at which the latest event showed up, uses that for latest, and then sets earliest to 1 hour before that. I made a guess at what you might like (the -1h), but the solution is there in concept; just pick another value.
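If the goal is the latest window covered by every forwarder, rather than by the fastest one, a variant of the same idea takes the minimum of the per-host recentTime. This is a sketch, not a tested fix: it assumes the hosts returned by metadata for this index are exactly your 12 forwarders, and it keeps the 1-hour window as an example value.

```
index="Production-UserLogin"
    [| metadata type=hosts index="Production-UserLogin"
     | stats min(recentTime) AS latest
     | eval search="earliest=" . relative_time(latest, "-1h") . " latest=" . latest
     | table search]
application="UserLogin" env="Production" type=* sourcetype=userLogin:Performance splunk_server=splunk02
| eval count=1
| timechart per_minute(count) as trans_per_min
| stats max(trans_per_min) as "Transactions-per-minute"
```

With min(recentTime), the window ends at the last timestamp that every host has already reported past, so a forwarder running a day behind pulls the whole window back a day, which matches the "accurate even if old" requirement.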


mattymo
Splunk Employee

I would suggest you look at tuning the forwarders. They may simply need more throughput in limits.conf. When forwarders fall behind, it is usually because they cannot send data to the indexers fast enough.

https://answers.splunk.com/answers/53138/maximum-traffic-of-a-universal-forwarder.html
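For reference, on a universal forwarder the throughput cap lives in the [thruput] stanza of limits.conf and defaults to 256 KBps. Raising it might look like this (the value below is an example; tune it for your network and test before rolling out):

```
# $SPLUNK_HOME/etc/system/local/limits.conf on the forwarder
[thruput]
# default on a universal forwarder is 256 KBps; 0 means unlimited
maxKBps = 1024
```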

- MattyMo

gcusello
SplunkTrust

Hi deepak02,
you could use a different, offset time period in your dashboard's searches, e.g. earliest=-45m@m latest=-25m@m.
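In Simple XML, such an offset window can be set per panel rather than embedded in the query string. A sketch of one panel's search stanza (the query shown is abbreviated from the thread):

```
<search>
  <query>index="Production-UserLogin" sourcetype=userLogin:Performance
    | eval count=1
    | timechart per_minute(count) as trans_per_min
    | stats max(trans_per_min) as "Transactions-per-minute"</query>
  <earliest>-45m@m</earliest>
  <latest>-25m@m</latest>
</search>
```

The trade-off is that the offset is fixed: it hides a known 20-minute lag, but it will not adapt if a forwarder falls further behind.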

Otherwise, you could run each panel's search as a separate scheduled report (e.g. every 5 minutes) and show the reports' results in your dashboards (this also makes the dashboard display very quickly).

In any case, I suggest verifying why your logs arrive late: there may be network problems, or you may need a different way of ingesting these logs.

Bye.
Giuseppe
