Deployment Architecture

Splunk is down: what happens to data monitoring during that downtime?

Rohit
Engager

Hi All, I am new to Splunk and joined this community seeking help.

Please help me clear up a few doubts.

My questions are:

1. If my Splunk is down for an hour and I get an ad hoc request for data covering that hour, what do I need to do once Splunk is back up (restart the Splunk forwarder?) to restore the data? Will the data be restored by itself, or will it be lost?

2. What should I do, or where should I check at the instance level, when I am unable to see the latest log files/data in Splunk?

3. What should I do if log files are missing from the Splunk forwarder after patching? How do I add the files back, and what is the correct approach?

1 Solution

gcusello
SplunkTrust

Hi @Rohit,

Your question is a little vague:

Are you speaking of the Splunk infrastructure (Indexers and Search Heads) used to store data and run searches, or about the input phase (Forwarders)?

If Forwarders, are you speaking of syslog feeds or logs ingested by Universal Forwarders?

Anyway, if you're speaking of Search Heads: when they are down, searches don't run, but logs are still indexed. When the system comes back up, searches resume, and it's possible to run specific searches over the down period. To avoid this downtime, Splunk provides Search Head Clustering.

If you're speaking of Indexers: while they are down, searches don't run and new data isn't indexed, but when they restart, the data held back by the Forwarders is indexed, so it's possible to run searches over the down period. In this case too, you can use an Indexer Cluster to avoid downtime.
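To confirm that the Indexers are catching up after a restart, you can watch the incoming throughput recorded in Splunk's internal metrics. A sketch, assuming the standard `metrics.log` `tcpin_connections` group and its `kb`/`sourceHost` fields (verify against your version):

```
index=_internal source=*metrics.log* group=tcpin_connections
| timechart span=5m sum(kb) as kb_received by sourceHost
```

A spike after the restart, followed by a return to the normal baseline, indicates the backlog from the Forwarders has been drained.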

If you're speaking of Forwarders (Heavy or Universal): if the Indexers are down, the Forwarders save logs locally and send them when the connection is restored.
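As a sketch of the relevant forwarder configuration (the stanza name and hostnames are placeholders; `useACK` is the standard outputs.conf acknowledgement setting):

```
# outputs.conf on the Forwarder (illustrative hosts)
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
# Keep events queued on the Forwarder until the Indexer
# acknowledges receipt, so an Indexer outage doesn't lose
# in-flight data
useACK = true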

The only situation where you lose data is syslog ingestion, if you don't have at least two Forwarders behind a Load Balancer.

To get an alert when a data flow stops, you can run a simple search on the data, scheduled according to the data's normal frequency.
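For example, a scheduled search along these lines can flag hosts that have gone quiet (a sketch; the index name and the one-hour threshold are assumptions to adapt to your data's frequency):

```
| tstats latest(_time) as last_event where index=main by host
| eval minutes_silent = round((now() - last_event) / 60)
| where minutes_silent > 60
```

Saved as an alert, this returns one row per host whose most recent event is older than the threshold.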

For the third question: as I said, you can lose data only if you ingest syslog and don't have at least two Forwarders and a Load Balancer; otherwise, you can read the data once the system restarts.

The only thing to watch is to analyze your data flows to understand the correct frequency and timeframe for your searches. For example, if a system in your infrastructure can be down for at most one hour, you could use earliest=-2h latest=-h in your searches; that way occasional downtimes don't affect the results.
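Putting that together, a scheduled search sketched under those assumptions (the index and sourcetype are placeholders; `latest=-h` is shorthand for one hour ago):

```
index=main sourcetype=access_combined earliest=-2h latest=-h
| stats count by status
```

Because the window ends an hour in the past, events delayed by up to an hour of downtime have already arrived by the time the search runs.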

Ciao.

Giuseppe


Rohit
Engager

Thanks a lot @gcusello for your response. I got the answer.


