All Apps and Add-ons

Can we run the kafka modular input on a forwarder?

Path Finder

We don't use the UI in our deployment at all for configuration, but use the config files.
How would we configure a Kafka modular input on a forwarder so it can distribute the data to all indexers?

1 Solution

Ultra Champion

You certainly can. Any Modular Input can be set up on a UF (where there is no Web UI to configure it).

When you use the Web UI, the fields you configure simply get persisted to inputs.conf in the background.

So when deploying on a UF, you just have to edit your inputs.conf stanza yourself and then restart the UF.

Example stanza (the original screenshot did not load; this is the same stanza as posted in plain text later in the thread):

[kafka://kafkatest]
disabled = 1
group_id = mytestgroup
index = main
sourcetype = kafka
topic_name = test
zookeeper_connect_host = localhost
zookeeper_connect_port = 2181
message_handler_params =
additional_jvm_propertys =
zookeeper_connect_raw_string =
output_type = stdout

Also, since UFs do not ship with a Python runtime, you will need to ensure that there is a system Python 2.7 runtime installed.
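A minimal sketch of the steps above, staging a stanza by hand. The app name "kafka_ta" and all stanza values are assumptions; on a real forwarder the file would go under $SPLUNK_HOME/etc/apps/kafka_ta/local, followed by a restart:

```shell
# Sketch only: stage an inputs.conf stanza for a UF deployment.
# APP_LOCAL defaults to a temp dir for illustration; on a real UF, point it
# at $SPLUNK_HOME/etc/apps/kafka_ta/local and finish with:
#   $SPLUNK_HOME/bin/splunk restart
APP_LOCAL="${APP_LOCAL:-$(mktemp -d)}"
mkdir -p "$APP_LOCAL"
cat > "$APP_LOCAL/inputs.conf" <<'EOF'
[kafka://kafkatest]
disabled = 0
topic_name = test
group_id = mytestgroup
zookeeper_connect_host = localhost
zookeeper_connect_port = 2181
index = main
sourcetype = kafka
EOF
echo "staged $APP_LOCAL/inputs.conf"
```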

New Member

Can we get a sample inputs.conf for Kafka pasted here?
Thanks in advance.

Contributor

As per the code, [https://github.com/damiendallimore/SplunkModularInputsJavaFramework/blob/master/src/com/splunk/modin...], the app is supposed to be deployed on the same Splunk instance where HEC is enabled (because the HEC endpoint is hardcoded to localhost rather than taken as an input, unlike the port etc.). Is there a strong reason for this approach?

Also, I am trying to test the performance/throughput of this app, and it looks like I am not able to post more than 800 messages/sec to the HEC endpoint. Do you by any chance have any benchmarks/metrics on how much load this app can handle? I have topics where 4k messages were produced per second.

Path Finder

Thanks Damien. I was just not sure whether I could specify the index and sourcetype in the same stanza. I'll verify this soon.
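For reference, index and sourcetype are ordinary inputs.conf keys, so they can sit in the same stanza alongside the modular input's own parameters. A sketch (stanza name and values here are illustrative):

```ini
[kafka://mytopic]
topic_name = mytopic
zookeeper_connect_host = localhost
zookeeper_connect_port = 2181
# routing keys in the same stanza
index = main
sourcetype = kafka
```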

Contributor

@Damien - Can you please share that example stanza? Somehow the screenshot is not loading. Also, when you deploy the app on a Universal Forwarder, do we just untar the app, create the inputs.conf (would this go into /etc/system/local?), and restart the forwarder? Please advise. Thanks!

Ultra Champion

[kafka://kafkatest]
disabled = 1
group_id = mytestgroup
index = main
sourcetype = kafka
topic_name = test
zookeeper_connect_host = localhost
zookeeper_connect_port = 2181
message_handler_params =
additional_jvm_propertys =
zookeeper_connect_raw_string =
output_type = stdout

Ultra Champion

inputs.conf should go in a local directory in /etc/apps (in whatever app makes sense in your environment)
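As a sketch, the resulting layout would look like this (the app name is illustrative; any app directory that makes sense in your environment works):

```
$SPLUNK_HOME/etc/apps/my_kafka_inputs/
    local/
        inputs.conf    # the [kafka://...] stanza goes here
```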

Influencer

If you are facing issues, I think a better idea would be to email ddallimore@splunk.com. He initiated this project.

Influencer

I haven't used this. As per the documentation, the inputs.conf configurations are:

[kafka://name]

# name of the topic
topic_name =

# consumer connection properties
zookeeper_connect_host =
zookeeper_connect_port =
group_id =
zookeeper_session_timeout_ms =
zookeeper_sync_time_ms =
auto_commit_interval_ms =
additional_consumer_properties =

# message handler
message_handler_impl =
message_handler_params =

# additional startup settings
additional_jvm_propertys =
