Hi All,
Hope you all are doing well.
I ran into an issue where the heavy forwarders are not sending their internal logs to Splunk Cloud. Apart from the internal logs, we can see all of the reporting devices' logs in the cloud. On the HFs themselves I see nothing when running "index=_internal", but on the back end I can see that splunkd.log and metrics.log are being updated.
I suspect that the cloud forwarding package (splunkclouduf.spl) is configured with an exception for the internal indexes.
I found some links on how to forward internal logs to indexers, but no similar thread for this Splunk Cloud case.
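For reference, these are the stock forwardedindex filters in a forwarder's default outputs.conf, which an app such as splunkclouduf.spl can override. If the whitelist entry that re-admits the internal indexes is missing from the app's [tcpout] stanza, _internal events are dropped before forwarding. A sketch of the defaults (the exact whitelist regex varies by Splunk version):

```
[tcpout]
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*
forwardedindex.2.whitelist = (_audit|_internal|_introspection)
forwardedindex.filter.disable = false
```

Comparing this against the [tcpout] stanza shipped in splunkclouduf.spl would show whether _internal is being filtered out.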
Thanks.
Hi bhsakarchourasiya_acct,
check whether you have heavy traffic: forwarders have a bandwidth limit (maxKBps in limits.conf — 256 KB/s by default on Universal Forwarders; Heavy Forwarders default to 0, i.e. unlimited, unless a limit has been set), and internal logs are sent after the other data.
You should also check whether the _internal logs reach Cloud late or not at all; you can verify this with a simple search:
index=_internal
| eval indextime=strftime(_indextime,"%Y-%m-%d %H:%M:%S.%3N"), diff=_indextime-_time
| table _time indextime diff
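To summarize the lag per forwarder rather than inspecting individual events, a variant of the same search (a sketch using the same fields) could be:

```
index=_internal
| eval diff=_indextime-_time
| stats avg(diff) AS avg_lag_s max(diff) AS max_lag_s BY host
```

A consistently large max_lag_s points to throttling or queue backpressure rather than a filtering problem.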
Anyway, you could set a greater value for maxKBps in limits.conf of your Heavy Forwarders.
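As an illustration, the limits.conf change would look like this (the value is illustrative; 0 means unlimited):

```
# $SPLUNK_HOME/etc/system/local/limits.conf on the Heavy Forwarder
[thruput]
# KB per second; 0 = unlimited
maxKBps = 0
```

A restart of the forwarder is needed for the new limit to take effect.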
Ciao.
Giuseppe
Hi bhsakarchourasiya_acct,
Did you check the permissions of splunkd.log, metrics.log, and the other log files, i.e. whether the splunk user has read access to all of them?
Regards,
Tejas