In a Red Hat OpenShift on-premises cluster I need to collect logs, metrics, and traces of the cluster. When there is no internet connection in the on-premises environment, how can I do this?
Hi @Skv ,
if the lack of connectivity is a temporary condition, having a Heavy Forwarder on premises will give you a sufficient cache to store logs until the connection is restored.
Ciao.
Giuseppe
@gcusello also, could you explain this in detail? ("if the lack of connectivity is a temporary condition, having a Heavy Forwarder on premises will give you a sufficient cache to store logs until the connection is restored.")
Hi @Skv ,
as I said, Splunk Forwarders (both Universal and Heavy) have a caching mechanism, so if there's no connection with the Indexers, logs are stored locally on the Forwarder until the connection is re-established.
Information about how these persistent queues work and how to configure them is available at https://docs.splunk.com/Documentation/Splunk/latest/Data/Usepersistentqueues .
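Just to make that concrete, here is a minimal sketch of what the configuration could look like on a Heavy Forwarder, assuming a raw TCP input on port 5140 (the port and the sizes are only placeholder examples, adapt them to your environment). Persistent queues are configured per input in inputs.conf:

[tcp://5140]
# in-memory queue in front of the disk queue
queueSize = 10MB
# disk-backed queue used while the Indexers are unreachable
persistentQueueSize = 14GB

Persistent queues are meant for ephemeral inputs (network, scripted, HEC); file monitor inputs don't need one, because the Forwarder simply stops reading ahead in the files until the connection returns.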
Ciao.
Giuseppe
Could you please share the script and show how it can be used, @gcusello?
Could you explain the solution structure and how it works?
When there is no internet for 14 hours in the factory data room, how will the logging and monitoring work, and where can the logs be stored while there is no connection? @gcusello
@gcusello could you explain: I need to view the logs locally in the factory data room while the connection is down, and once the connection is back up the data has to be sent to Splunk Cloud. How can this be done?
Perhaps you need to look at why the internet connection is down for so long and invest in a more robust network architecture so that the connection is maintained for a higher percentage of the time?
Hi @Skv ,
as I said, Splunk Forwarders can store logs on local disk if the connection with the Indexers isn't available, until the connection becomes available again.
It depends on the available disk space: e.g. if you know that your systems generate 1 GB/hour, you have to give your Forwarders 14 GB of disk for 14 hours so the Forwarder can store all the logs.
Then you should check whether the connection (when available) is sufficient to send all 14 GB to the Indexers; that depends on the network bandwidth and on how long the network stays available.
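To make that sizing concrete, here is a quick back-of-the-envelope calculation in Python (the 1 GB/hour rate, the 14-hour outage and the 100 Mbit/s link are only example figures, replace them with your own):

# rough sizing of the local queue and of the catch-up time (example figures)
ingest_rate_gb_per_hour = 1.0   # data generated on the forwarder
outage_hours = 14               # longest expected disconnection
link_mbit_per_s = 100           # usable bandwidth towards the Indexers / Splunk Cloud

backlog_gb = ingest_rate_gb_per_hour * outage_hours           # disk to reserve for the queue
link_gb_per_hour = link_mbit_per_s / 8 / 1024 * 3600          # link capacity in GB/hour
# while catching up, the forwarder must also ship the new data still being produced
catchup_hours = backlog_gb / (link_gb_per_hour - ingest_rate_gb_per_hour)

print(f"Disk to reserve for the queue: {backlog_gb:.0f} GB")
print(f"Estimated catch-up time:       {catchup_hours:.2f} hours")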
Ciao.
Giuseppe