Reporting

Is there a way to recover data from a period when Splunk ran out of space and accounts had been disabled?

Aburenheide
Engager

Wondering if there is any way to recover data that isn't showing up in Splunk on any alert or dashboard for a time period when Splunk had run out of disk space and the accounts that owned those objects had become disabled. The space issue has been fixed, and the alerts and dashboards have been given different ownership. The Splunk forwarder is running on all computers.

Basically, when we run any of our alerts and dashboards, we don't get any events between 4/15 and 4/27. The event logs on all computers have events for that time period, but Splunk isn't pulling them in.
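For reference, this is a simple check for that window, broken out by index and sourcetype (the year is assumed here for illustration; substitute the real one):

    index=* earliest=04/15/2024:00:00:00 latest=04/28/2024:00:00:00
    | stats count by index, sourcetype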

0 Karma
1 Solution

PickleRick
SplunkTrust

As @gcusello already hinted, there is no simple general answer. It all depends on the sources and the methods used to ingest the data.

One border case is UDP-based syslog, where the sending party doesn't even know, on any layer, whether the packet was received. So obviously there is no transmission control, no buffering, and so on.

The other end of the spectrum would be reading from files, which just sit there waiting to be read.

There may be other inputs which have their own limitations.

So you'd have to review your inputs.
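One quick way to review them on each forwarder is btool (the path assumes a default install; adjust it to yours):

    $SPLUNK_HOME/bin/splunk btool inputs list --debug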

And there is also the matter of queueing events within the forwarder. I know that the forwarder keeps events in queues (and throttles inputs where possible) in case of transmission problems (indexer unreachable, SSL problems, and so on). I'm not sure how it behaves when the indexer actively refuses to receive events, though. Can anyone elaborate?
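For reference, that forwarder-side buffering is configurable. A minimal sketch, assuming a universal forwarder sending to a single indexer over tcpout; the server name, port, queue sizes and the UDP input are illustrative, not recommendations:

    # outputs.conf on the forwarder - the output queue absorbs short indexer outages
    [tcpout]
    defaultGroup = primary_indexers
    maxQueueSize = 100MB

    [tcpout:primary_indexers]
    server = indexer.example.com:9997

    # inputs.conf - a disk-backed persistent queue for a lossy input such as UDP syslog
    [udp://514]
    sourcetype = syslog
    queueSize = 1MB
    persistentQueueSize = 500MB

As far as I know, persistent queues apply to network and scripted inputs only; file monitor inputs don't need them, because the files themselves act as the buffer.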


0 Karma

Aburenheide
Engager

Thanks guys. 

0 Karma


gcusello
SplunkTrust

Hi @Aburenheide,

If your indexing was blocked for two weeks, your forwarders certainly buffered part of these logs in their queues, but those queues are limited in size, so not all of the events from the outage were saved. Whatever was buffered has most likely already been sent to the indexers; you can check this by matching the date indexing stopped against the data that is now available.
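For example, a search like this (the year is assumed; substitute the real one) shows whether anything with a timestamp inside the outage window was indexed, and how late it arrived:

    index=* earliest=04/15/2024:00:00:00 latest=04/28/2024:00:00:00
    | eval index_lag_s = _indextime - _time
    | stats count max(index_lag_s) as max_lag_s by host sourcetype

A large max_lag_s means those events sat in the forwarder queues and were flushed after the space issue was fixed.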

Recovering the lost data isn't so easy:

If you still have the source files, you can manually reindex them, although there is a risk of duplicated events; if the data arrived as syslog (not written to a file) or via HEC, it is lost.

To manually reindex the missing logs from the source files, you have two choices:

  • you can ingest them through the Splunk GUI, manually setting the sourcetype, host, and index information,
  • you can copy them into a temporary folder and create a temporary input that ingests them using the crcSalt = <SOURCE> option (see the sketch below).
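A rough sketch of that second option (the folder path, index, and sourcetype are hypothetical; set them to match your environment): copy the missing log files into a temporary folder on a forwarder, add a one-off monitor stanza, and remove the stanza once the files are indexed:

    # inputs.conf - temporary stanza, remove after the files have been ingested
    [monitor:///opt/splunk_reingest/*.log]
    index = main
    sourcetype = your_original_sourcetype
    # crcSalt = <SOURCE> includes the file's full path in its CRC, so files copied
    # to this new folder are read again even if the same content was seen before
    crcSalt = <SOURCE>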

Ciao.

Giuseppe

0 Karma