Getting Data In

What does sendCookedData actually do on a heavy forwarder (i.e. what does 'cooked' mean at a technical level)?

moonhound
Explorer

What transformations and processing happen when data is cooked on a heavy forwarder? Is it the same as the data being indexed, just without local storage (barring also setting indexAndForward to true)? Or, if an app says it has 'index-time operations', will they happen during the heavy forwarder's processing of the data? I can see that props.conf changes are applied, but I don't have a lot of leeway for testing at the moment.

I have a heavy forwarder sitting in front of an indexer cluster as a means of load balancing and homogenizing data that doesn't play nicely with load balancing or with Splunk in general. Some apps that people here have requested say they can't handle indexing in an indexer cluster, so I'm trying to verify whether we can push those out onto the heavy forwarder and still end up with usable data in our cluster.
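For reference, the indexAndForward behavior mentioned above is configured in outputs.conf on the heavy forwarder. A minimal sketch (the output group name and server addresses are placeholders for your environment):

```ini
# outputs.conf on the heavy forwarder
# Keep a local indexed copy in addition to forwarding.
[indexAndForward]
index = true

# Forward the (cooked) data on to the indexer cluster;
# group name and server list below are hypothetical.
[tcpout]
defaultGroup = my_indexer_cluster

[tcpout:my_indexer_cluster]
server = idx1.example.com:9997, idx2.example.com:9997
```

With index = false (the default), the heavy forwarder only parses and forwards, which matches the "indexed just without local storage" framing in the question.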

1 Solution

rphillips_splk
Splunk Employee

This post gives an explanation of cooked vs. raw data: http://answers.splunk.com/answers/292/what-is-the-distinction-between-parsed-unparsed-and-raw-data.h... . As data comes into an indexer, it passes through a series of queues (tcpin, parsingQueue, AggQueue, typingQueue, indexingQueue) before it is written to disk (http://docs.splunk.com/File:Cloggedpipeline.png). If you have a heavy forwarder (i.e., a full Splunk instance), the data is processed through these queues on the heavy forwarder and not again at the indexer.
If an app requires props.conf/transforms.conf settings that would typically be placed on an indexer, you will want to place them on the heavy forwarder instead, since the indexer will not re-parse the data after the heavy forwarder has processed it.
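As a concrete illustration of that last point, index-time settings such as line breaking, timestamp extraction, and nullQueue routing belong on the heavy forwarder rather than the indexers. A minimal sketch, assuming a hypothetical sourcetype and transform name:

```ini
# props.conf on the heavy forwarder (sourcetype name is hypothetical)
[my:custom:sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^timestamp=
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
# Apply an index-time transform at parse time
TRANSFORMS-dropdebug = drop_debug_events

# transforms.conf on the same heavy forwarder
[drop_debug_events]
REGEX = \slevel=DEBUG\s
DEST_KEY = queue
FORMAT = nullQueue
```

Because these rules run in the heavy forwarder's parsing pipeline, events routed to nullQueue never reach the indexers at all, and the surviving events arrive already cooked.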



moonhound
Explorer

That's very helpful, thank you!
