Getting Data In

Heavy forwarder internal logs to splunk cloud

bhsakarchourasi
Path Finder

Hi All,

Hope you all are doing well.

I ran into an issue where heavy forwarders are not sending internal logs to Splunk Cloud. Except for the internal logs, we can see all the reporting devices' logs in Cloud. On the HFs themselves, running "index=_internal" returns nothing, but on the back end I can see that the splunkd and metrics logs are being updated.

I suspect that the cloud package (splunkclouduf.spl) used for forwarding logs is configured with an exclusion.
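One way to test that suspicion: index filtering on a forwarder lives in the forwardedindex.* settings of outputs.conf, which the cloud package can override. The stanza below is only a sketch of roughly what Splunk's shipped defaults look like (the exact whitelist regex varies by version); if an app overrides these rules without re-whitelisting _internal, internal logs stop being forwarded.

```
# outputs.conf -- forwardedindex rules decide which indexes a forwarder
# sends on. Approximate shipped defaults (version-dependent):
[tcpout]
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*
forwardedindex.2.whitelist = (_audit|_internal|_introspection)
forwardedindex.filter.disable = false
```

You can compare the effective values on the HF against the app's copy to see whether _internal is still whitelisted.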

I found some links on how to forward internal logs to indexers, but no similar thread for Splunk Cloud.

Thanks.

1 Solution

gcusello
SplunkTrust

Hi bhsakarchourasiya_acct,
check whether you have heavy traffic: forwarders throttle their output bandwidth (maxKBps in limits.conf, which defaults to 256 KB/s on Universal Forwarders), and internal logs are sent after the other data.
You should also check whether the _internal logs arrive in Cloud late or not at all; you can verify this with a simple search:

index=_internal
| eval indextime=strftime(_indextime,"%Y-%m-%d %H:%M:%S.%3N"), diff=_indextime-_time
| table _time indextime diff

Anyway, you could set a greater value for maxKBps in limits.conf of your Heavy Forwarders.
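For reference, a minimal limits.conf sketch for raising that limit on the Heavy Forwarder; the value 2048 below is only an illustrative choice, not a recommendation, and 0 disables the limit entirely:

```
# $SPLUNK_HOME/etc/system/local/limits.conf on the Heavy Forwarder
[thruput]
# Maximum forwarding bandwidth in KB/s; 0 means unlimited.
# 2048 is an example value -- tune it to your network.
maxKBps = 2048
```

Restart the HF after the change so the new limit takes effect.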

Ciao.
Giuseppe


tbavarva
Path Finder

Hi bhsakarchourasiya_acct,

Did you check the permissions on splunkd.log, metrics.log, and the other log files, i.e. whether the splunk user has permission to read all of them?
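For example, a quick way to sketch that check from the HF's shell (check_readable is just an illustrative helper, and the paths assume a default $SPLUNK_HOME of /opt/splunk):

```shell
# Report whether the current user can read a given log file.
check_readable() {
  if [ -r "$1" ]; then
    echo "readable: $1"
  else
    echo "NOT readable: $1"
  fi
}

# Run this as the user Splunk runs as; adjust SPLUNK_HOME if needed.
SPLUNK_HOME="${SPLUNK_HOME:-/opt/splunk}"
check_readable "$SPLUNK_HOME/var/log/splunk/splunkd.log"
check_readable "$SPLUNK_HOME/var/log/splunk/metrics.log"
```

Any "NOT readable" line points at a file the ingest pipeline cannot monitor.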

Regards,
Tejas

