Reporting

Is there a way to recover data from a period when Splunk ran out of space and accounts had been disabled?

Aburenheide
Engager

Wondering if there is any way to recover data that is not showing up in any Splunk alert or dashboard for a time period when Splunk had run out of space and the accounts that owned those objects were disabled. The space issue has been fixed and the alerts and dashboards have been given different ownership. The Splunk forwarder is running on all computers.

Basically, when we run any of our alerts and dashboards we don't get any events between 4/15 and 4/27. The event logs on all the computers have events for that time period, but Splunk isn't pulling them in.

0 Karma
1 Solution

PickleRick
SplunkTrust
SplunkTrust

As @gcusello already hinted - there is no simple general answer. It all depends on the sources and methods used to ingest data.

One extreme case is UDP-based syslog, where the sending party doesn't even know, at any layer, whether the packet was received. So obviously there is no transmission control, buffering and so on.

The other end of the spectrum would be reading from files, which just sit there waiting to be read.

There may be other inputs which have their own limitations.

So you'd have to review your inputs.

And there is also the matter of queueing events within the forwarder. I know the forwarder keeps events in queues (and throttles inputs if possible) in case of transmission problems (indexer unreachable, SSL problems and so on), but I'm not sure how it behaves when the indexer actively refuses to receive events. Can anyone elaborate?
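
If it helps, here is a minimal sketch of the queue settings I mean, assuming a forwarder sending to indexers on port 9997 and a UDP syslog input on port 514 (the server names, ports and sizes are only illustrative placeholders, not recommendations):

    # outputs.conf on the forwarder - in-memory output queue that absorbs short outages
    [tcpout:primary_indexers]
    server = idx1.example.com:9997, idx2.example.com:9997
    maxQueueSize = 512MB

    # inputs.conf - persistent (on-disk) queue for a network input that cannot be throttled
    [udp://514]
    sourcetype = syslog
    queueSize = 10MB
    persistentQueueSize = 5GB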

0 Karma

Aburenheide
Engager

Thanks guys. 

0 Karma

gcusello
SplunkTrust
SplunkTrust

Hi @Aburenheide,

If indexing was blocked for two weeks, your forwarders surely saved part of those logs in their queues, but the queues are limited, so not all of the events from the outage were preserved. The events that were saved have most likely already been sent to the indexers; you can check this by comparing the date indexing stopped with the data that is now available.
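
For example, a quick way to compare the indexing stop date with the available data is to count events per day across the outage window (a minimal sketch; run it over a time range spanning the outage, and narrow index=* to the indexes you care about):

    | tstats count where index=* by _time span=1d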

Recovering the lost data isn't so easy:

If you still have the source files, you can manually re-index them, but there is a risk of duplicate events; if the data came in via syslog (not from a file) or HEC, it is lost.

To manually re-index the missing logs from the source files, you have two choices:

  • you can bring them in using the Splunk GUI, manually setting the sourcetype, host and index information,
  • you can put them in a temp folder and create a temporary monitor input that ingests them using the crcSalt = <SOURCE> option (see the example stanza below).
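
As an example, the temporary input for the second option could look something like this (a rough sketch; the folder path, index and sourcetype are placeholders to replace with your own values):

    # inputs.conf - temporary stanza to re-index the recovered files
    [monitor:///opt/recovered_logs]
    index = main
    sourcetype = your_sourcetype
    crcSalt = <SOURCE>
    disabled = false

Remember to remove the temporary stanza (and the temp folder) once the files have been indexed, otherwise they could be read again later.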

Ciao.

Giuseppe

0 Karma