
Is it possible to combine the two functions of indexing and heavy forwarding on a single VM host?

kevbod
New Member

Guys, I currently have Splunk Enterprise 6.5.0 Free running on a W2k8 R2 host, with Universal Forwarders (Windows hosts) and direct syslog (Unix hosts) feeding log data into it just fine.

I am working here on the assumption that Splunk licensing comes into play at the indexer, and that any paring down of unwanted events at the heavy forwarder will reduce the licensing liability. Is this assumption correct?

I want to evaluate the heavy forwarder's parsing functionality for data volume licensing reasons, and am wondering if I can combine the two functions of heavy forwarding and indexing on a single VM host. I don't think I can, but I am trying to limit the number of VMs I have to run for this test scenario. Please refer to the high-tech test scenario diagram below 🙂

+---------------+                        +------------------------------+
| External Host |  >>>>>>>>>>>>>>>>>>>>  | Splunk Enterprise Server     |
| Log Data      |                        | Heavy Forwarder >>>> Indexer |
+---------------+                        +------------------------------+

Thanks in anticipation. kevbod

1 Solution

lguinn2
Legend

You don't need to run a heavy forwarder at all in your scenario. You can parse out the data that you want to drop on the indexer itself. Any data that arrives at the indexer but that is eliminated (prior to being written to disk) does not count against your daily license.

Look at the section named "Filter event data and send to queues" in the documentation here.
Note that it says in the opening paragraph: "Although similar to forwarder-based routing, queue routing can be performed by an indexer, as well as a heavy forwarder." In my experience, it is more efficient to do this with the indexer. I would not run a heavy forwarder in your case.
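
For anyone trying this, the mechanism that documentation describes is a props.conf/transforms.conf pair that routes unwanted events to the nullQueue during parsing, so they are discarded before they are written to disk (and before they count against the license). Here is a minimal sketch, assuming a hypothetical sourcetype my_sourcetype and treating events containing "DEBUG" as the unwanted ones; substitute your own sourcetype and pattern:

# props.conf on the indexer (e.g. $SPLUNK_HOME/etc/system/local/)
# "my_sourcetype" is a placeholder; use the sourcetype of your data
[my_sourcetype]
TRANSFORMS-null = drop_debug

# transforms.conf in the same directory
# Events whose raw text matches REGEX are routed to the nullQueue
# and discarded before indexing; "DEBUG" is a placeholder pattern
[drop_debug]
REGEX = DEBUG
DEST_KEY = queue
FORMAT = nullQueue

Restart splunkd after editing the files. Only events matching the regex are dropped; everything else is indexed as normal.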

maciep
Champion

Not sure if I'm misunderstanding, but you can drop unwanted events on the indexer too. That happens during the parsing phase, which should happen on your indexer with the current setup.

This might be a good page to review as well - a high-level overview of the phases and where each phase happens in different types of Splunk deployments.

http://wiki.splunk.com/Where_do_I_configure_my_Splunk_settings%3F

kevbod
New Member

Guys, many thanks for your prompt and very helpful replies. Apologies for the botched diagram - it's my first post on here and I should have looked at the preview before posting. DOH!

aaraneta_splunk
Splunk Employee

Don't worry about the formatting, @kevbod - that's why we have moderators for newbies on Answers 🙂

Also, if lguinn or maciep helped to answer your question, please resolve this post by clicking "Accept" below the best answer, and don't forget to upvote anything that was helpful too!

If you still need some additional help, however, please leave a comment with some feedback for them. Thanks!

