Getting Data In

Why is data not getting parsed at the Heavy Forwarder?

omuelle1
Communicator

Hi,

I am having an issue trying to extract fields at the Heavy Forwarder level. We are in a shared Cloud environment, but some Heavy Forwarders are local, so we want these HFs to do the field extraction; however, it doesn't seem to work.

I created a transforms.conf and a props.conf, and when I tested them on my local Splunk instance without a Heavy Forwarder, they did work:

props.conf:

## Custom Extractions Meraki ##
TRANSFORMS-Logtype=Logtype
TRANSFORMS-pattern=pattern
TRANSFORMS-security_event_dtl=security_event_dtl
TRANSFORMS-message=message
TRANSFORMS-request=request
TRANSFORMS-src=src
TRANSFORMS-user=user

## Change user field ##

EVAL-user = replace(user, "\\\,\\\20", ",")
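
(Note: these attributes only take effect under a props.conf stanza that matches the data, which is omitted above; a minimal sketch, assuming a hypothetical sourcetype name meraki — yours may differ:

[meraki]
TRANSFORMS-Logtype = Logtype
TRANSFORMS-pattern = pattern

Also worth noting: TRANSFORMS- classes run at index/parsing time, so they can run on a Heavy Forwarder, while EVAL- is a search-time calculated field and will only run on the search tier, not the HF.)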

transforms.conf:

## Extract custom Meraki fields ##

[Logtype]
SOURCE_KEY = source
REGEX = \\meraki\\(?<Logtype>\w+)

[pattern]
SOURCE_KEY = _raw
REGEX = pattern:(?<pattern>.*)

[security_event_dtl]
SOURCE_KEY = _raw
REGEX = security_event\s(?<security_event_dtl>\w+)\s\w+

[message]
SOURCE_KEY = _raw 
REGEX = message:(?<message>.*)

[request]
SOURCE_KEY = _raw
REGEX = request:\s\w+(?<request>.*)

[src]
SOURCE_KEY = _raw
REGEX = client_ip='(?<src>.*)

[user]
SOURCE_KEY = _raw
REGEX = CN=(?<user>.*?),OU
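
(For index-time extraction on the HF, the documented approach also sets WRITE_META = true on each transform so the extracted value is written into the event's indexed metadata; a sketch for one stanza, using an explicit FORMAT as shown in the index-time docs:

[pattern]
SOURCE_KEY = _raw
REGEX = pattern:(.*)
FORMAT = pattern::$1
WRITE_META = true

Per the transforms.conf spec, WRITE_META = true is required for index-time field extractions unless DEST_KEY = _meta is used; without it, a TRANSFORMS- stanza like the ones above won't persist anything at the HF.)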

From my understanding it should be possible to do these field extractions at the Heavy Forwarder level, correct?

I appreciate your help,

Oliver

omuelle1
Communicator

Hey guys, thank you for your help. It looks like I needed an additional fields.conf file on my HF to extract the fields at the Heavy Forwarder level.
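
For anyone finding this later: the fields.conf piece presumably just declares each extracted field as indexed; a sketch using the field names from the transforms above (and likewise for the remaining fields):

[Logtype]
INDEXED = true

[pattern]
INDEXED = true

Note that the index-time extraction docs place fields.conf on the search tier so searches treat these as indexed fields, so where it needs to live may vary by deployment.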

woodcock
Esteemed Legend

Are you sure? What was the setting?

woodcock
Esteemed Legend

If you are sure that your settings are correct, it must be something else. If you are doing a sourcetype override/overwrite, you must use the ORIGINAL value, NOT the new value. You must deploy your settings to the first full instance(s) of Splunk that handle the events (usually the HF tier if you use one, otherwise your Indexer tier) and restart all Splunk instances there, UNLESS you are using HEC's JSON endpoint (those events arrive pre-cooked) or INDEXED_EXTRACTIONS (in that case the configs go on the UF). When (re)evaluating, send in new events (old events will stay broken), then test using _index_earliest=-5m to be absolutely certain that you are examining only the newly indexed events.
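
A concrete version of that test, with placeholder index and sourcetype names:

index=your_index sourcetype=your_sourcetype _index_earliest=-5m

If the events returned by that search carry the expected fields, the index-time extraction is working; if not, the settings (or the restart) on the first full Splunk instance are still the problem.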

omuelle1
Communicator

We are in a Cloud environment shared by several companies, but we do have some autonomy at the Heavy Forwarder level, which is why we want to extract the fields there.

Thank you for your reply. I wrote my .conf files based on this documentation, but they are not extracting in the Heavy Forwarder setup described above.

gcusello
SplunkTrust

Hi omuelle1,
why do you want to extract them on Heavy Forwarders?

Anyway, if you want to extract fields at index time, see https://docs.splunk.com/Documentation/SplunkCloud/8.0.0/Data/Configureindex-timefieldextraction

Ciao.
Giuseppe

gcusello
SplunkTrust

Hi omuelle1,
You could extract them at search time on the Search Heads, so you wouldn't have any restrictions.
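For example, a search-time version of one of your extractions could go in props.conf on the Search Heads (the stanza name is an assumption; use whatever matches your data):

[meraki]
EXTRACT-pattern = pattern:(?<pattern>.*)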

Ciao.
Giuseppe

omuelle1
Communicator

I understand, but due to our setup we want to do it at the HF level in this case. That should be possible as well.
