Getting Data In

What does sendCookedData actually do on a heavy forwarder (i.e. what does 'cooked' mean at a technical level)?

moonhound
Explorer

What transformations / processing happens when data is cooked on a heavy forwarder? Is it the same as the data being indexed just without local storage (barring also setting indexAndForward to true)? Or rather if an app says it has 'index time operations' will they happen during the heavy forwarder's processing of data? I can see that props.conf changes are applied but I don't have a lot of leeway for testing at the moment.

I have a heavy forwarder sitting in front of an indexer cluster as a means of load balancing / homogenizing data that doesn't play nicely with load balancing or splunk in general. Some apps that people here have requested we get set up say they can't handle indexing in an indexer cluster, so I'm trying to verify if we can shove those out onto the heavy forwarder and end up with usable data in our cluster.
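For reference, the topology described above is wired up in outputs.conf on the heavy forwarder. A minimal sketch, assuming hypothetical indexer hostnames and the default receiving port:

```ini
# outputs.conf on the heavy forwarder -- hostnames are placeholders
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# load-balance cooked data across the cluster peers
server = idx1.example.com:9997, idx2.example.com:9997
```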

1 Solution

rphillips_splk
Splunk Employee

This post explains cooked vs. raw data: http://answers.splunk.com/answers/292/what-is-the-distinction-between-parsed-unparsed-and-raw-data.h... As data comes into an indexer, it passes through a series of queues (tcpin, parsingQueue, aggQueue, typingQueue, indexQueue) before being written to disk (http://docs.splunk.com/File:Cloggedpipeline.png). With a heavy forwarder (i.e., a full Splunk Enterprise instance) in front, the data passes through these queues on the heavy forwarder and is not parsed again at the indexer.
If an app ships props.conf/transforms.conf settings that would normally go on an indexer, place them on the heavy forwarder instead, because the indexer will not re-parse data that the heavy forwarder has already processed.
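As a sketch of what that placement looks like: an index-time operation such as rewriting the destination index would live in props.conf/transforms.conf on the heavy forwarder, since that is where parsing happens in this topology. The sourcetype and index names below are placeholders, not from any specific app:

```ini
# props.conf on the heavy forwarder -- "my:sourcetype" is a placeholder
[my:sourcetype]
TRANSFORMS-route = route_to_index

# transforms.conf on the heavy forwarder
[route_to_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = myindex
```

Putting the same stanzas on the cluster peers would have no effect on this data, because it arrives there already cooked.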


moonhound
Explorer

That's very helpful, thank you!
