Deployment Architecture

Reparsing cooked data coming from a heavy forwarder. Possible?

Lucas_K
Motivator

We have a situation where a third party uses UFs on their source hosts, which forward to their heavy forwarders, which in turn forward to our UFs and then on to our indexers.

We need to do some additional index-time field extractions to improve performance (yes, I know about the search-time vs. index-time extraction flexibility trade-off).

I've found that this isn't working, and I think it's quite possibly because the data we are receiving is already cooked. Checking the internal logs from the data's source shows "connectionType=cooked", but this could just be the normal UF sourcetype/host/index markings.

  1. Is there a way to see/tell if an intermediate heavy forwarder is further cooking the data? (See the search sketch just after this list.)
  2. Is it possible to re-parse whatever data we receive in order to perform further index-time field extractions?
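
For question 1, one place to look is the tcpin_connections metrics on the receiving instance. A minimal sketch, assuming the _internal index is searchable from where you are and that metrics.log is reporting its usual incoming-connection fields (connectionType, fwdType):

index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(connectionType) AS connectionType latest(fwdType) AS fwdType BY hostname sourceIp

If fwdType comes back as "full" for a connection, the sender is a heavy/full instance rather than a UF (which reports "uf"), which is a strong hint the data arriving on that connection has already been parsed.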

Update: I've verified that I am able to create the index-time field extractions when I reindex from re-created raw files using the same config. So this doesn't seem to be a config issue; it looks like an already-cooked / trying-to-reparse-data issue.
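
For reference, the shape of the index-time extraction being tested looks like the following (a minimal sketch only; the sourcetype, field name, and regex are placeholders, not the actual config from this environment):

props.conf
[my_sourcetype]
TRANSFORMS-session = session_id_indexed

transforms.conf
[session_id_indexed]
REGEX = session_id=(\w+)
FORMAT = session_id::$1
WRITE_META = true

fields.conf
[session_id]
INDEXED = true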


dfronck
Communicator

I've only got one splunktcp feed, so maybe this is global, but it works with and without the port. I'm assuming that since including the port doesn't generate errors, it's working.

My lab Windows server has a v6.0.2 universal forwarder with the Windows TA installed. It sends to a heavy forwarder. The heavy forwarder sends to an indexer.

These are the inputs/props/transforms from the heavy forwarder. I couldn't get inputs.conf to change the index, so I did it in props/transforms.

inputs.conf
[splunktcp://11100]
route=has_key:_utf8:parsingQueue;has_key:_linebreaker:parsingQueue;absent_key:_utf8:parsingQueue;absent_key:_linebreaker:parsingQueue;
connection_host = ip

props.conf
[source::splunktcp:11100]
SEDCMD-removemessage = s/(?mis)(Token Elevation Type indicates|This event is generated|Subject fields indicate the account).*//g
TRANSFORMS-changeindex=changeindex

transforms.conf
[changeindex]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = windows_lab

rphillips_splk
Splunk Employee

@dfronck, did this configuration above on your HF work out for you?


dfronck
Communicator

Yes. We're deployed to a couple hundred servers now, running 6.2.2 UFs to three 6.2.3 HFs. We're using this as the source now instead of splunktcp:
[source::WinEventLog:Security]
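
In other words, the same props.conf settings as in the earlier post, but keyed on the Windows event log source instead of the splunktcp source (a sketch of the swap being described, not a verbatim paste of the production config):

props.conf
[source::WinEventLog:Security]
SEDCMD-removemessage = s/(?mis)(Token Elevation Type indicates|This event is generated|Subject fields indicate the account).*//g
TRANSFORMS-changeindex = changeindex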


coreyf311
Path Finder

Did this work using splunktcp:port? I am trying to use this same config to rename the index using source::splunktcp:port and it is not working.


Lucas_K
Motivator

OK, I found what I need to do.

http://splunk-base.splunk.com/answers/5528/forwarding-select-data-in-my-environment

[splunktcp]
route=has_key:_utf8:parsingQueue;has_key:_linebreaker:parsingQueue;absent_key:_utf8:parsingQueue;absent_key:_linebreaker:parsingQueue

Unfortunately it applies globally, so I can't do it on a per-sourcetype basis as far as I can tell. 😕

jrodmantcell
Explorer

This method is unsupported, undocumented, and unsafe. Do not do this; there is no guarantee it won't be removed in the future, break unexpectedly on update, or break right now in a totally unexpected way.

dfronck
Communicator

Thanks for the help. It took me forever to get this working because I was assuming that if inputs.conf didn't change the index, it wasn't working.

Once I got the SEDCMD right, I saw that the boilerplate Microsoft text was gone, and then I just set the index in transforms.
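
A quick sanity check for both changes (a sketch; it assumes the windows_lab index name from the transforms above and the usual WinEventLog:Security sourcetype from the Windows TA):

index=windows_lab sourcetype=WinEventLog:Security "This event is generated"
| stats count

Newly indexed events should be landing in windows_lab, and the count for the stripped boilerplate phrase should stay at zero.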
