Getting Data In

How to achieve a different index based on receiving port on a heavy forwarder?

shubham87
Explorer

Hi All,

We are collecting different logs from the same source on different UDP ports on a Heavy Forwarder. The Heavy Forwarder forwards these logs to the Indexer on a single port. We would like the logs collected on the different UDP ports on the Heavy Forwarder to be stored in different indexes on our Indexer, so we can apply different storage and retention policies.

Can someone guide me on how to achieve this?

Regards
Shubham

1 Solution

ekost
Splunk Employee

I'm repeating the question, to check my understanding. You have data coming into a Heavy Forwarder (HF) on unique UDP ports, and you'd like to place that data into unique indexes on the Indexer? If so, just update the inputs.conf on the HF where you've defined the UDP ports, and add an index = $index_name to each input stanza. Please look at the answer in this Answers post for an example.
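
For example, a minimal sketch of what those stanzas might look like on the HF (the ports, sourcetypes, and index names are illustrative placeholders; the udp://9514 / meraki values match what appears later in this thread):

inputs.conf on the HF:

[udp://514]
connection_host = ip
sourcetype = syslog
index = network_logs

[udp://9514]
connection_host = ip
sourcetype = meraki
index = meraki_logs

Note that each index named here must already exist on the indexer, or the events sent to it will not be indexed.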


bic
Explorer

Edit the existing inputs.conf. Do you want to send the data to different index names, or to different indexers?
If it's different indexers, then you will have to edit outputs.conf as well, like below:

outputs.conf:

[tcpout:IndexerA]
server = address:9997

[tcpout:IndexerB]
server = address2:9997

and then reference these groups in your inputs.conf:

[udp://<port1>]
_TCP_ROUTING = IndexerA

[udp://<port2>]
_TCP_ROUTING = IndexerB

In inputs.conf you can specify a different index for each input as well, alongside the routing, as in the sketch below.
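
Combining the two (a sketch with a placeholder port and index name, assuming the IndexerA output group defined above), a single UDP stanza can set both the destination index and the routing group:

[udp://<port1>]
index = network_logs
_TCP_ROUTING = IndexerA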



shubham87
Explorer

Thanks. But I am using the GUI to create UDP receivers on the HF. Shall I create inputs.conf under the local folder on the HF and use that?


ekost
Splunk Employee

The inputs.conf that creates the UDP listener might live under $SPLUNK_HOME/etc/system/local or $SPLUNK_HOME/etc/apps/search/local. Remember that you can have multiple inputs.conf files, and that they're all merged to form the active configuration on the forwarder. If you want to determine where that inputs.conf file is using the CLI, use the btool command ./splunk cmd btool inputs list --debug and look for your UDP stanzas. The --debug flag ensures that the file paths are shown. Yes, get used to editing the .conf files directly in a text editor, as there are options available that are not exposed in the UI.
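
For instance, assuming a default install location (and grep as an optional filter to narrow the output, not part of the btool syntax):

cd $SPLUNK_HOME/bin
./splunk cmd btool inputs list --debug | grep udp

With --debug, each line of output is prefixed with the path of the .conf file that supplied that setting, so you can see exactly which local folder to edit.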


shubham87
Explorer

Thanks. It works.


ekost
Splunk Employee

Good news!


shubham87
Explorer

I have made the changes as suggested, but it is not working as intended. After making the changes, I stopped receiving logs on the Indexer.

Following is the output from btool:

/opt/splunk/etc/apps/search/local/inputs.conf [udp://9514]
/opt/splunk/etc/system/default/inputs.conf _rcvbuf = 1572864
/opt/splunk/etc/apps/search/local/inputs.conf connection_host = ip
/opt/splunk/etc/system/local/inputs.conf host = "Hostname of HF"
/opt/splunk/etc/apps/search/local/inputs.conf index = meraki_logs
/opt/splunk/etc/apps/search/local/inputs.conf sourcetype = meraki

Can you please direct me on how to troubleshoot this?



adonio
Ultra Champion

Start here:
http://docs.splunk.com/Documentation/SplunkCloud/6.6.0/Forwarding/Routeandfilterdatad
It lists the option you are seeking and gives sample syntax.
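
That documentation also covers routing by event content rather than by input, using props.conf and transforms.conf on the heavy forwarder. A minimal sketch (the port, transform name, and output group IndexerA below are illustrative assumptions, reusing the group naming from earlier in this thread):

props.conf:

[source::udp:514]
TRANSFORMS-routing = route_to_A

transforms.conf:

[route_to_A]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = IndexerA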


Phoenix254
Loves-to-Learn

I'm facing a similar issue. My setup, however, is on a single local machine: I have Splunk installed and the forwarder installed as well; Splunk runs fine and so does the forwarder. However, I get the following error. What should I do? Is there a way I can still have the forwarder send the traffic to the Splunk server even though they both run locally? Could this be due to a bottleneck on port 9997?

The TCP output processor has paused the data flow. Forwarding to host_dest=XX.XX.xx.xx inside output group default-autolb-group from host_src=DESKTOP-USER has been blocked for blocked_seconds=10. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data

Later on, after refreshing the page, I get this error:

Now skipping indexing of internal audit events, because the downstream queue is not accepting data. Will keep dropping events until data flow resumes. Review system health: ensure downstream indexing and/or forwarding are operating correctly.

My outputs.conf:

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = xxx.xxx.xx.xxx:9997

[tcpout-server://xxx.xxx.xx.xxx:9997]

My inputs.conf:

[splunktcp://9997]
connection_host = xxx.xxx.xx.xxx

[monitor://C:\Snort\log\alert.ids]
disabled = false
sourcetype = snort_alert_full
source = snort
index = main

Or is it the fact that the server and the forwarder are on the same machine, so they compete for resources? Thanks for any assistance. Still a noobie.


PickleRick
SplunkTrust

Digging up a six-year-old thread might not be the best way to get a meaningful response.

I'd advise you to start a new thread with a detailed description of your problem and what you've already attempted in order to resolve it.
