Getting Data In

How to configure a universal forwarder on CentOS 7?

matt29600
Engager

Hello,

My problem is that the data I send with the forwarder does not reach Splunk.

Here is how I configured the forwarder.

First, I started the forwarder from $SPLUNK_HOME/bin:

./splunk start

Second, I configured the forwarder to connect to a receiving indexer and to a deployment server:

./splunk add forward-server Ip_of_splunk:9997
./splunk set deploy-poll Ip_of_splunk:8089
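
As a sanity check, the forwarder CLI can confirm both settings (run from $SPLUNK_HOME/bin; both are standard universal forwarder commands):

./splunk list forward-server
./splunk show deploy-poll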

Third, I configured inputs.conf with the logs I wanted to collect:

[monitor:///var/log/secure.log]
index = logcentos
sourcetype = secure

[monitor:///var/log/httpd/access.log]
index = logapache
sourcetype = acces_log
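
To verify how Splunk merges these stanzas, btool can dump the effective monitor configuration (run from $SPLUNK_HOME/bin):

./splunk btool inputs list monitor --debug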

Fourth, I configured the firewall:

firewall-cmd --zone=public --add-port=9997/tcp --permanent
firewall-cmd --reload
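
As a quick check that the Splunk server's ports are reachable from the forwarder, bash's built-in /dev/tcp can be used (Ip_of_splunk as above):

timeout 3 bash -c 'cat < /dev/null > /dev/tcp/Ip_of_splunk/9997' && echo "9997 open"
timeout 3 bash -c 'cat < /dev/null > /dev/tcp/Ip_of_splunk/8089' && echo "8089 open"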

Fifth, I restarted the forwarder from $SPLUNK_HOME/bin:

./splunk restart

Once the restart finished, I checked Splunk Web and saw that nothing had arrived in the indexes I just configured.

I checked that I didn't make any mistakes in the index names, and there are none.
I checked whether the forward-server is "active", and yes, it is active.

So I don't know what the problem is, because I have the "same" configuration as a working forwarder on Windows.

Thank you in advance for helping me find a solution.

0 Karma
1 Solution

DavidHourani
Super Champion

Hi @matt29600,

By default, read permission on the /var/log folder is restricted to root. It could be that your Splunk forwarder doesn't have read permission on those logs.

Try monitoring another file, for example one in /tmp with full read permission, and check whether it gets picked up by Splunk.
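
For example, something like this (the path and index here are just placeholders):

echo "splunk read test" >> /tmp/splunk_test.log
$SPLUNK_HOME/bin/splunk add monitor /tmp/splunk_test.log -index main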

If it does, you will need to change the access control list (ACL) permissions on /var/log to allow Splunk to read.

Cheers,
David


matt29600
Engager

The first problem was that I hadn't opened port 8089 in the firewall, so I had this error:

DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
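
The fix was the same approach as step four, but run on the central Splunk instance:

firewall-cmd --zone=public --add-port=8089/tcp --permanent
firewall-cmd --reload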

The second problem was that the forwarder didn't have read permissions on the files "/var/log/secure.log" and "/var/log/httpd/access.log".

Check @DavidHourani's answer for the link on applying an ACL (to solve the permissions problem).

Thanks to @DavidHourani, @FrankVl and @oscar84x for helping me solve my problem.

0 Karma

matt29600
Engager

Hi @DavidHourani

I added this configuration to inputs.conf:

[monitor:///var/log/mariadb/mariadb.log]
index = logmariadb  
sourcetype = mariadb

And I do receive the logs from this configuration, so I'm not sure it's a permissions issue; it would be strange if only one input had permission.
But I'm still going to try monitoring another file in /tmp.

0 Karma

DavidHourani
Super Champion

The permissions issue is on /var/log/secure.log and the httpd folder, not on any new file you create with the right permissions 🙂

0 Karma

matt29600
Engager

You're right. I tried switching to log files in /tmp and it works.

0 Karma

DavidHourani
Super Champion

Yeah, I had the same problem before. You need to either change the permissions on your server logs (which I don't recommend) or apply an ACL as shown here:
https://serverfault.com/questions/258827/what-is-the-most-secure-way-to-allow-a-user-read-access-to-...
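
Something along these lines, assuming the forwarder runs as a "splunk" user (adjust the user to your setup):

# read on the monitored files, execute on the httpd directory so it can be traversed
setfacl -m u:splunk:r /var/log/secure.log
setfacl -m u:splunk:rx /var/log/httpd
setfacl -m u:splunk:r /var/log/httpd/access.log

Note that log rotation can recreate these files without the ACL, so you may need to reapply it (or set a default ACL on the directory).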

0 Karma

matt29600
Engager

Ok thank you

0 Karma

DavidHourani
Super Champion

Most welcome! Please up-vote and accept if this was helpful!

0 Karma

FrankVl
Ultra Champion

Several things you can check:
- I assume the respective indexes were created on your central Splunk instance?
- Search over all time, in case the device clock or timestamp extraction is off and events are hiding somewhere else on the timeline.
- Does index=_internal show events from this forwarder? If so, does it contain any errors or warnings for those specific inputs? Does it report that the respective monitors have started? (Example search below.)
- If there are no internal events either: check $SPLUNK_HOME/var/log/splunk/splunkd.log locally on the forwarder for any errors or warnings, and whether it reports that the respective monitors have started.
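
For the index=_internal check, a search along these lines works (replace <your_forwarder> with the forwarder's hostname):

index=_internal host=<your_forwarder> source=*splunkd.log (log_level=ERROR OR log_level=WARN)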

0 Karma

matt29600
Engager

Yes, the indexes have been created on the central Splunk instance.

There is no problem with the time.

In index=_internal there are events from this forwarder, and they contain this error:

"DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected"

Thanks to this error, I noticed that I had not opened port 8089 in the firewall, but I still don't receive the logs in the indexes.

0 Karma

FrankVl
Ultra Champion

If the internal logs are being forwarded, the issue clearly is not on the forwarding side. Two main options remain: an issue on the input side, or for some reason the data is indexed but you can't see it (hence my suggestion to search over all time).

Probably best to first confirm whether the input is working. The internal logs should make it quite clear whether the input started and picked up any of the files, or whether it is reporting errors or warnings.

0 Karma

matt29600
Engager

I restarted the server to see the internal logs at startup.
I could see in the logs that the forwarder picked up the file (for input):

TailingProcessor - Adding watch on path: "path configured in inputs.conf"

I also noticed these, but I don't know if they are errors:

ServerConfig - Found no site defined in server.conf

Failed to initialize http_proxy from server.conf   -> same error for https_proxy and no_proxy

But this time, to my great surprise, I received the logs for mariadb (configured in inputs.conf this morning). The configuration:

[monitor:///var/log/mariadb/mariadb.log]
index = logmariadb  
sourcetype = mariadb

So the problem may be the inputs.conf configuration, but I don't see where the mistakes could be in this case.
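
To double-check which files the forwarder is actually tailing, I can run this from $SPLUNK_HOME/bin (list monitor is a standard forwarder CLI command):

./splunk list monitor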

0 Karma

oscar84x
Contributor

* What is your outputs configuration? (Compare the outputs to your known good configuration on the other forwarder.)
* Is this going to an existing indexer that has already been receiving data successfully, or to a new indexer?
* If it's a new indexer, then step 4, opening port 9997, should be run on the indexer. Your wording makes it sound like it might've been run on the forwarder. (See the sketch below.)
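
For reference, a new indexer needs receiving enabled as well as the firewall rule; something like this on the indexer (assuming firewalld and a default $SPLUNK_HOME):

$SPLUNK_HOME/bin/splunk enable listen 9997
firewall-cmd --zone=public --add-port=9997/tcp --permanent
firewall-cmd --reload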

0 Karma

matt29600
Engager

The outputs configuration is:

[tcpout]
defaultGroup = default-autolb-group

[tcpout-server://172.16.0.49:9997]

[tcpout:default-autolb-group]
disabled = false
server = 172.16.0.49:9997

Compared to the known good configuration, the only difference is the extra "disabled = false" line.
Yes, it's a new indexer, and the opening of port 9997 was run on the indexer.

0 Karma

oscar84x
Contributor

Are there any errors or messages regarding outputs in the forwarder's splunkd.log? Anything that confirms it's monitoring the files?
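
For example, a quick way to pull those out on the forwarder (the grep patterns are just a starting point; TcpOutputProc covers the output side, Tailing the file monitors):

grep -E "ERROR|WARN" $SPLUNK_HOME/var/log/splunk/splunkd.log | grep -E "TcpOutputProc|Tailing"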

0 Karma

matt29600
Engager

Yes, this message has been appearing since yesterday at 4:22 p.m.:

INFO  HttpPubSubConnection - Running phone uri=/services/broker/phonehome/connection_172.16.0.48_8089_172.16.0.48_srv-test-splunk_3079F50F-8312-40B1-B17B-A33BCB6BCEC2

Does this part, "connection_172.16.0.48_8089_172.16.0.48_srv-test-splunk_3079F50F-8312-40B1-B17B-A33BCB6BCEC2", mean it's trying to connect to this IP?

This message has been appearing since I opened firewall port 8089 on the central Splunk instance to fix this error:

    DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected

and the last "different" message was:

 INFO  DC:HandshakeReplyHandler - Handshake done.

So I guess I fixed the handshake issue.

0 Karma