Hi,
I am having an issue when trying to extract fields at the Heavy Forwarder level. We are in a shared Cloud environment, but some Heavy Forwarders are local, so we want these HFs to do the field extraction; however, it doesn't seem to work.
I created a transforms.conf and a props.conf, and when I tested them on my local Splunk instance without a Heavy Forwarder, they do work:
props.conf:
## Custom Extractions Meraki ##
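## (these attributes go under the relevant [<sourcetype>] or [source::...] stanza) ##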
TRANSFORMS-Logtype=Logtype
TRANSFORMS-pattern=pattern
TRANSFORMS-security_event_dtl=security_event_dtl
TRANSFORMS-message=message
TRANSFORMS-request=request
TRANSFORMS-src=src
TRANSFORMS-user=user
## Change user field ##
EVAL-user = replace(user, "\\\,\\\20", ",")
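## (note: EVAL- attributes are applied at search time only) ##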
transforms.conf:
## Extract custom Meraki fields ##
[Logtype]
SOURCE_KEY = source
REGEX = \\meraki\\(?<Logtype>\w+)
[pattern]
SOURCE_KEY = _raw
REGEX = pattern:(?<pattern>.*)
[security_event_dtl]
SOURCE_KEY = _raw
REGEX = security_event\s(?<security_event_dtl>\w+)\s\w+
[message]
SOURCE_KEY = _raw
REGEX = message:(?<message>.*)
[request]
SOURCE_KEY = _raw
REGEX = request:\s\w+(?<request>.*)
[src]
SOURCE_KEY = _raw
REGEX = client_ip='(?<src>.*)
[user]
SOURCE_KEY = _raw
REGEX = CN=(?<user>.*?),OU
From my understanding it should be possible to do these field extractions at the Heavy Forwarder level, correct?
I appreciate your help,
Oliver
Hey guys, thank you for your help. It looks like I needed an additional fields.conf file on my HF to extract the fields at the Heavy Forwarder level.
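Roughly along these lines (a minimal sketch, one stanza per extracted field; INDEXED = true marks each as an index-time field):
fields.conf:
[Logtype]
INDEXED = true
[pattern]
INDEXED = true
[security_event_dtl]
INDEXED = true
[message]
INDEXED = true
[request]
INDEXED = true
[src]
INDEXED = true
[user]
INDEXED = true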
Are you sure? What was the setting?
If you are sure that your settings are correct, it must be something else. A few things to check:
- If you are doing a sourcetype override/overwrite, you must use the ORIGINAL sourcetype value, NOT the new one.
- Deploy your settings to the first full instance(s) of Splunk that handle the events (usually the HF tier if you use one, otherwise your indexer tier) and restart all Splunk instances there. The exceptions are HEC's JSON endpoint (those events arrive pre-cooked) and INDEXED_EXTRACTIONS (those configs go on the UF).
- When (re)evaluating, send in new events (old events will stay broken), then test with _index_earliest=-5m to be absolutely certain that you are only examining the newly indexed events, as in the example below.
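A quick test search along these lines (the index and sourcetype names are placeholders) keeps you on freshly indexed events only:
index=<your_index> sourcetype=<your_sourcetype> _index_earliest=-5m
| table _time Logtype pattern src user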
We are in a Cloud environment shared by several companies, but we do have some autonomy at the Heavy Forwarder level, which is why we want to extract the fields there.
Thank you for your reply. I wrote my .conf files based on this documentation, but they are not extracting in the Heavy Forwarder setup described above.
Hi omuelle1,
why do you want to extract them on Heavy Forwarders?
Anyway, if you want to extract fields at index time, see https://docs.splunk.com/Documentation/SplunkCloud/8.0.0/Data/Configureindex-timefieldextraction
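In short, the index-time form documented there needs FORMAT and WRITE_META in transforms.conf, plus a matching fields.conf, e.g. (a sketch using just one of your fields):
transforms.conf (on the HF):
[pattern]
REGEX = pattern:(.*)
FORMAT = pattern::$1
WRITE_META = true
fields.conf:
[pattern]
INDEXED = true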
Ciao.
Giuseppe
Hi omuelle1,
You could extract them at search time on the Search Heads, so you don't have any restrictions.
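For search time, the props.conf attribute is REPORT- instead of TRANSFORMS-, e.g. (a sketch; the sourcetype name is a placeholder):
props.conf (on the Search Heads):
[<your_sourcetype>]
REPORT-meraki = Logtype, pattern, security_event_dtl, message, request, src, user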
Ciao.
Giuseppe
I understand, but due to our setup we want to do it at the HF level in this case. That should be possible as well.