Getting Data In

Why is Splunk UDP 514 syslog going straight to the indexer and bypassing a second set of heavy forwarders?

leonaheidern2
Loves-to-Learn Everything

hi all

I am running a Windows heavy forwarder on Splunk Enterprise 8.1.7.2, listening on ports TCP 9514 and UDP 514.

The data comes into the main index, I apply a transforms/props rewrite to route it to another index, and the logs reach my indexers and search heads (both the search heads and indexers are Red Hat 7.9, Splunk Enterprise 8.2.0).

However, my heavy forwarders also send a copy off to another set of Splunk Red Hat 7.9 heavy forwarders, but it seems anything besides the default Splunk logs on TCP 9997 does not reach them.

My config is as follows:


## inputs.conf

[tcp:9514]
disabled = false
connection_host = ip
index = main

[udp:9514]
disabled = false
connection_host = ip
index = main

[udp:514]
disabled = false
connection_host = ip
index = main

[tcp:514]
disabled = false
connection_host = ip
index = main


## transforms.conf

[index_redirect_to_pci]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = pci

## props.conf

[host::x.x.x.x]
TRANSFORMS-rt1 = host_rename_rt1, index_redirect_to_pci


How do I get the logs from ports 514 and 9514 to be forwarded to the second set of heavy forwarders?

I have one Red Hat heavy forwarder on which I installed syslog-ng, changed Splunk to monitor that folder, and removed the port 514 listener. That is the only Splunk heavy forwarder that can send syslog data over to the second set of Splunk servers that is not receiving the logs from the transformed index.
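For reference, cloning parsed data to a second forwarder tier is normally driven by outputs.conf target groups, optionally combined with _TCP_ROUTING on the inputs. A minimal sketch, where all the group names and hostnames are placeholders rather than my real config:

```ini
## outputs.conf on the first heavy forwarder
[tcpout]
# listing both groups clones every event to both destinations
defaultGroup = primary_indexers, second_hf_tier

[tcpout:primary_indexers]
server = idx1.example.com:9997

[tcpout:second_hf_tier]
server = hf2a.example.com:9997

## inputs.conf - alternatively, route only specific inputs to both groups
[udp:514]
_TCP_ROUTING = primary_indexers, second_hf_tier
```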


PickleRick
SplunkTrust

You're overcomplicating things in the first place. Instead of sending data straight to the right index, you're doing transforms that rewrite metadata, which only eats your CPU pointlessly. If you sent the events into the proper index in the first place, you could even have used a UF instead of an HF, since you wouldn't need all this transforms magic.
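Assigning the index at the input, as suggested above, is just a matter of setting it per stanza; a sketch (the index name is illustrative):

```ini
## inputs.conf - route this listener straight to the target index
[udp:514]
connection_host = ip
index = pci
```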

And to manipulate, redirect, or filter syslog... stick to the syslog daemon (rsyslog/syslog-ng). It's easier, more straightforward, and more flexible than handling syslog directly with a Splunk forwarder. Yes, the forwarder can listen on a raw TCP or UDP port and receive events, but it's not something it excels at.
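As an illustration of doing the filtering/redirecting in the syslog daemon itself, a minimal rsyslog sketch (the IP and file path are made up):

```
# rsyslog sketch: receive UDP 514, route one device's logs to its own file
module(load="imudp")
input(type="imudp" port="514")

if $fromhost-ip == '192.0.2.10' then {
    action(type="omfile" file="/var/log/remote/pci-device.log")
    stop
}
```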


leonaheidern2
Loves-to-Learn Everything

The equipment are network devices or app-driven appliances on which clients are unable to install the universal forwarder.

One of the heavy forwarders is a Windows server. Port-wise, everything uses port 514 from the devices' side, hence the transforms and props: data lands in the main index first and is then transformed into other indexes.

  • Now I have a requirement to forward the logs before they reach my indexer. Can this be done without syslog-ng or rsyslog?
    • I have another heavy forwarder that is listening on UDP 514 via data inputs but also using syslog-ng rules; somehow it is unable to send the logs to this other set of heavy forwarders midway to my indexer. Do I need to remove the data inputs in Splunk Web to get it to send as well?


PickleRick
SplunkTrust

Of course. Syslog is very popular for hosts that you cannot install a UF on (such as network equipment, storage arrays, and many other kinds of devices).

But you need to be able to receive those syslog messages somewhere. Theoretically, you can create an input on a Universal Forwarder, a Heavy Forwarder, or even on an indexer, but it's far more convenient and maintainable to do this using a proper syslog server.

From your questions I suspect you're trying to do some strange things, such as listening on a socket that is already used by your local syslog daemon (that's one of the reasons why it's best to delegate syslog processing to a proper syslog daemon).

Furthermore, to listen on port 514 you'd need to run Splunk as the root user, which is not a very good idea.


leonaheidern2
Loves-to-Learn Everything

Hi Patrick. The transforms and props were part of a legacy configuration from when my Splunk servers were still on Windows.

When we migrated the Splunk heavy forwarder over to Red Hat, we zipped and unzipped it, leaving the configuration intact.

Since everything was working, we didn't feel the need to change the configuration until the new requirement came up. However, it was done by a vendor and a colleague from another department. All I was left with was a single transforms.conf and multiple IP addresses with transforms and props to multiple indexes, so I sort of inherited the infrastructure without the documentation and am trying to reverse engineer it without breaking things.


isoutamo
SplunkTrust

Hi

As @PickleRick said, your configuration is too complex, as you already know and explained. My proposal is that you set up a syslog server (rsyslog or syslog-ng) and use it as the receiver for all the different syslog events. Then write those to separate files/directories and use a UF to ingest them into Splunk. Then you can easily define which index to use based on the directory/files.
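The file-based approach described above might look roughly like this on the UF side (the paths, indexes, and sourcetypes here are placeholders):

```ini
## inputs.conf on the universal forwarder
[monitor:///var/log/remote/network/...]
index = pci
sourcetype = syslog
# host is the 5th path segment: /var/log/remote/network/<host>/...
host_segment = 5

[monitor:///var/log/remote/firewall/...]
index = firewall
sourcetype = syslog
host_segment = 5
```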

As a bonus, you get a more secure environment with syslog than with a Splunk UF/HF running as root.

r. Ismo


PickleRick
SplunkTrust

I'm not a great fan of the "write to files, then ingest the files" concept. IMO it consumes too many IOPS for no reason. I know it was the recommended way of receiving syslog data, but I assume that dates from when syslog daemons couldn't write to HEC directly. Now they can, so let's do it 😉 This way we also don't need to care about rotating files.
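For what it's worth, recent syslog-ng versions can post to HEC with the generic http() destination. A rough sketch, where the URL, token, and source name are illustrative and untested:

```
destination d_splunk_hec {
    http(
        url("https://splunk.example.com:8088/services/collector/raw")
        method("POST")
        headers("Authorization: Splunk <your-HEC-token>")
        body("${ISODATE} ${HOST} ${MSGHDR}${MSG}")
    );
};

log { source(s_network); destination(d_splunk_hec); };
```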


isoutamo
SplunkTrust

That is really one option: use HEC as an output from syslog. Another is to use a TCP output with Splunk's metadata and send events directly to the indexers' receiving port 😉 But let's try to keep these instructions as simple as possible and easy to manage without deep knowledge of many things other than Splunk 😉

I also prefer a direct connection (with buffering/spooling) from syslog to Splunk, but setting it up needs some other tools etc., which are probably out of scope for this discussion.

r. Ismo

leonaheidern2
Loves-to-Learn Everything

Does anyone know how to get syslog-ng to listen on TCP 9514 and UDP 9514?

One of the devices sends on those two ports, but when I tried on a VM with SELinux enforcing, syslog-ng couldn't start on those ports.

Strangely, it can start on UDP and TCP 10514.

I tried installing policycoreutils-python (for semanage) and adding the ports:

semanage port -a -t syslogd_port_t -p tcp 9514
semanage port -a -t syslogd_port_t -p udp 9514


PickleRick
SplunkTrust

Do you get any errors? Is anything listening on 9514 already? Are you sure it's an SELinux issue? If so, did you try audit2why or audit2allow?


leonaheidern2
Loves-to-Learn Everything

Considering the zone it's in, that test VM should not have anything listening on port 9514. I tried a netstat -ano | grep 9514 and didn't find that port either. I have allowed the port in firewall-cmd.

I have not heard of audit2why and audit2allow. Maybe I could try those. 


leonaheidern2
Loves-to-Learn Everything

I got the syslog-ng service to listen on those two ports after semanage.

  • For some reason the syslog-ng service itself was the one holding onto those ports, causing it to fail and say I was restarting the syslog-ng service too quickly.

I have some issues with the configuration, though. As a syslog-ng novice: does each port/protocol need to go to a different file, even though I set it to listen at a blanket, catch-all level?

But somehow only the udp 514 / tcp 514 type logs show up; logs with the keyword from the other ports aren't present in the log file. On the network device itself there is no way to specify the port/protocol from the GUI.


My config for the syslog source ports is as below.


source s_networkdevice {
    udp(port(514));
    tcp(port(514));
    udp(port(10514));
    tcp(port(9514));
    udp(port(9514));
};

filter f_networkdevice { host("ipaddress"); };

destination d_networkdevice { file("/home/syslog/logs/network/$HOST/$YEAR-$MONTH-$DAY.log" create_dirs(yes)); };

log { source(s_networkdevice); filter(f_networkdevice); destination(d_networkdevice); };
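(For reference, if each port does need its own file, one simple pattern is a separate source per port, each wired to its own destination; the file paths below are hypothetical:)

```
source s_udp514  { udp(port(514)); };
source s_tcp9514 { tcp(port(9514)); };

destination d_udp514  { file("/home/syslog/logs/udp514/$HOST/$YEAR-$MONTH-$DAY.log" create_dirs(yes)); };
destination d_tcp9514 { file("/home/syslog/logs/tcp9514/$HOST/$YEAR-$MONTH-$DAY.log" create_dirs(yes)); };

log { source(s_udp514);  filter(f_networkdevice); destination(d_udp514);  };
log { source(s_tcp9514); filter(f_networkdevice); destination(d_tcp9514); };
```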
