
IIS Heavy Forwarder Translation

djl
Explorer

We are working through a staged migration in which two Splunk instances will run in parallel for a while before we switch over. Because naming conventions are fun, we are adopting an entirely new index naming convention in the new system.

To handle this, we have a 7.2 heavy forwarder set up to do index translation based on the sending host and then forward to our 7.2 development environment. We do not want to make any changes to the endpoints beyond a new outputs.conf file. Right now the UFs are set up to send twice: once to legacy and once to the new development heavy forwarder.
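For reference, the dual-destination setup on the UFs might look something like the sketch below; the output group names and server addresses are hypothetical, not taken from our actual environment.

#outputs.conf (on each UF)
[tcpout]
defaultGroup = legacy_indexers, new_dev_heavy_forwarder

[tcpout:legacy_indexers]
server = legacy-idx1.example.com:9997

[tcpout:new_dev_heavy_forwarder]
server = new-hf1.example.com:9997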

I was cruising along just fine with Linux machines and test Windows machines until I hit the [iis] sourcetype a few unproductive days ago.

For some reason, a simple "take anything from this host and send it to this new index" rule like the one below is not working for that one sourcetype, and its events continue to land in the legacy index on the new indexer. All of the other sourcetypes from that host, including anything destined for the _ indexes, are being routed to the new index (which is fine with me during the transition period).

#props.conf
[host::LIB-IISTEST1]
TRANSFORMS-index-lib-iistest1 = host_index_routing_lib-iistest1

#transforms.conf
[host_index_routing_lib-iistest1]
DEST_KEY=_MetaData:Index
REGEX=.*
FORMAT=servers-windows_library

I have gone as far as trying to hijack all of the iis sourcetypes and rename them to a new sourcetype called iis_translated, and that is not working either. I suspect it is related to iis being a known sourcetype and something about the data arriving already parsed.
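A sourcetype-rename attempt of that kind would typically look something like the following; this is a sketch of the general technique, not the exact config we used, and the stanza names are illustrative.

#props.conf
[iis]
TRANSFORMS-rename-iis = rename_iis_translated

#transforms.conf
[rename_iis_translated]
DEST_KEY = MetaData:Sourcetype
REGEX = .
FORMAT = sourcetype::iis_translated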

Any suggestions?

1 Solution

djl
Explorer

To answer my own question: yes, this is because the iis data arrives fully extracted, and overriding that behavior was very complicated, though doable. My solution ended up being to re-parse the data completely, following this answer - https://answers.splunk.com/answers/97918/reparsing-cooked-data-coming-from-a-heavy-forwarder-possibl...

It retains the original host and source information and only changes the index.

What follows will apply to all data that the heavy forwarder processes, but in our case this is fine as a temporary migration solution.

#inputs.conf
[splunktcp]
route=has_key:_utf8:parsingQueue;has_key:_linebreaker:parsingQueue;absent_key:_utf8:parsingQueue;absent_key:_linebreaker:parsingQueue
connection_host = ip

#props.conf
[host::LIB-IISTEST1]
TRANSFORMS-index-lib-iistest1 = host_index_routing_lib-iistest1

#transforms.conf
[host_index_routing_lib-iistest1]
DEST_KEY=_MetaData:Index
REGEX=.*
FORMAT=servers-windows_library



woodcock
Esteemed Legend

Don't forget to click Accept and to UpVote any other helpful comments/answers.


woodcock
Esteemed Legend

The problem is that you are almost certainly using INDEXED_EXTRACTIONS=w3c to collect the IIS logs (which is the correct thing to do). When you use that feature, the data is actually cooked on the UF, which means that all of your props.conf and transforms.conf settings must also be on the UF for them to take effect. I know it sounds crazy, but just try it.
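Concretely, that would mean deploying the same routing pair from the question to the UF instead of the heavy forwarder; a sketch, assuming the stanzas stay as written:

#props.conf (on the UF)
[host::LIB-IISTEST1]
TRANSFORMS-index-lib-iistest1 = host_index_routing_lib-iistest1

#transforms.conf (on the UF)
[host_index_routing_lib-iistest1]
DEST_KEY=_MetaData:Index
REGEX=.*
FORMAT=servers-windows_library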


djl
Explorer

We cannot change the endpoints beyond the outputs.conf file. Is there a way to re-parse the data completely on the heavy forwarder?


woodcock
Esteemed Legend

There is a VERY good reason to use INDEXED_EXTRACTIONS for IIS; in fact, it was created JUST FOR this sourcetype! The problem is that IIS admins may change the number and order of the fields at any time, so if you use a traditional hard-coded field name/order, it is almost certain to go haywire in the future and NOT be noticed for a long time.

In any case, the answer to your question is: yes; just remove the INDEXED_EXTRACTIONS line and then handle the field extractions in the traditional way with props.conf on your search heads.
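A search-time alternative on the search heads might look roughly like the sketch below; the field list is illustrative only, since the real W3C field order comes from the #Fields header in each IIS log (which is exactly the fragility described above).

#props.conf (on the search heads)
[iis]
REPORT-iis-fields = iis_delim_fields

#transforms.conf (on the search heads)
[iis_delim_fields]
DELIMS = " "
FIELDS = date, time, s_ip, cs_method, cs_uri_stem, cs_uri_query, s_port, cs_username, c_ip, sc_status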
