
Track the intermediate forwarder that parsed my event.

manojsecsme
Explorer

In our current Splunk infrastructure we have a number of UFs pushing data to a layer of intermediate forwarders, which parse or filter the data and push it to a layer of indexers.

While troubleshooting an issue with data that is already indexed, is there any way to find out which intermediate forwarder parsed a given event? We are able to identify the UF/data source from the host field, and the splunk_server field tells us the indexer where the data is stored or served from.

1 Solution


venkatasri
SplunkTrust

Hi @manojsecsme 

Yes, I was talking about the same thing. If your host value is actually a forwarder, then that will help in future cases.

 


venkatasri
SplunkTrust

Hi @manojsecsme 

You could try this, however I don't find any field that can tell you which one did the parsing. If you have a group of intermediate forwarders, the UF usually sticks with each of them for some time, based on the auto load-balancing algorithm.

If you know when the issue happened, you can run the following SPL; host will contain the intermediate forwarder(s) that the UF was connected to at that time.

 

index=_internal Metrics group=tcpin_connections (fwdType=uf OR fwdType=lwf) (sourceHost=<your_uf_host> OR sourceIp=<your_uf_ip>) | table _time host
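
If it helps, here is a variant of that search (a sketch, assuming the same tcpin_connections fields from metrics.log) that summarizes when the UF was attached to each intermediate forwarder:

index=_internal Metrics group=tcpin_connections (fwdType=uf OR fwdType=lwf) (sourceHost=<your_uf_host> OR sourceIp=<your_uf_ip>) | stats min(_time) AS first_seen max(_time) AS last_seen count BY host | convert ctime(first_seen) ctime(last_seen)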

 

A permanent solution is to write an index-time field for the sourcetype/source/host you are having issues with, something like intermediate_forwarder, which will then be added to the Interesting fields when you search that data. This will help for future cases, not for issues that have already occurred.
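
For example, the wiring could look something like the below on each intermediate forwarder (a sketch only; the stanza name add_intermediate_forwarder, the field name intermediate_forwarder, and the hardcoded hostname are placeholders, and each forwarder would need its own value, e.g. pushed from a per-host deployment app):

# props.conf on the intermediate forwarder
[<sourcetype>]
TRANSFORMS-intfwd = add_intermediate_forwarder

# transforms.conf on the same instance
[add_intermediate_forwarder]
# match every event and write a static indexed field naming this forwarder
REGEX = .
SOURCE_KEY = _raw
FORMAT = intermediate_forwarder::if01.example.com
WRITE_META = true

You may also need a fields.conf entry with INDEXED = true on the search head for the new field to behave well in searches.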

---

An upvote would be appreciated, and please accept the solution if this reply helps!

 


manojsecsme
Explorer

Hi @venkatasri, thanks for your response. I was already aware of getting the intermediate forwarder host from the Splunk internal logs using the SPL you shared, but I wanted a better option, as that is not easy to troubleshoot or track with.

With respect to creating the index-time field, I did some further reading, and it looks like we can only extract an indexed field from the raw data or from one of the Splunk metadata fields (sourcetype, source, or host).

In this case, are you asking us to do something like the below in transforms.conf?

[<transform_name>]
SOURCE_KEY = MetaData:Host
REGEX = (.*)
FORMAT = intermediate_forwarder::$1
WRITE_META = true
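
I assume that stanza would also need to be referenced from props.conf on the parsing tier for it to run at index time, something like the below (names are placeholders)?

[<sourcetype>]
TRANSFORMS-intfwd = <transform_name>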
