Getting Data In

Ingestion of etc/resolv.conf

anandhalagarasa
Path Finder

Hi Team,

We have a requirement to ingest the /etc/resolv.conf file from all Linux and HP-UX machines, so I created an app and pushed the configuration to all the Linux and HP-UX machines using the deployment server. The app has been received on all of the machines, but when I search the data in Splunk Cloud I can see it for only 10 hosts; the remaining 150+ hosts are not reporting into Splunk Cloud. The splunk user has read permission on the file, yet the logs are still not getting ingested.

There are no error messages in splunkd.log on the client machines either, and the INFO events say that the stanza has been parsed, but I still cannot search the logs from the rest of the client machines in Splunk Cloud.

Stanza :

[monitor:///etc/resolv.conf]
sourcetype = abc
index = xz
disabled = 0

Kindly help me identify where the gap is so that I can check and fix it.


woodcock
Esteemed Legend

You are doing it wrong. You should not be using a monitor input; you should be using a scripted input with a shell script that contains cat /etc/resolv.conf and runs on some schedule.
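
For example, a minimal sketch of such a scripted input, assuming the script ships in the deployed app's bin directory and is executable (the script name is made up, the interval is arbitrary, and the sourcetype and index are just the ones from your stanza):

[script://./bin/read_resolv_conf.sh]
interval = 3600
sourcetype = abc
index = xz
disabled = 0

And bin/read_resolv_conf.sh:

#!/bin/sh
# Print the current contents of resolv.conf so each scheduled run indexes a fresh snapshot
cat /etc/resolv.conf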


DavidHourani
Super Champion

Hi @anandhalagarasan,

So it's working for some of your hosts but not for others.

Here's how I advise you to proceed:
1- Look at _internal for any errors, if you're forwarding internal logs. Tip: if the internal logs are there, that means there is no connectivity issue and the problem could be permission-related.
2- Connect to one of the hosts where forwarding is not working and have a look at splunkd.log for any errors you might have missed in _internal.
3- Check permissions and confirm the status of the tailing processor to be sure that the file is being read:

 $SPLUNK_HOME/bin/splunk _internal call /services/admin/inputstatus/TailingProcessor:FileStatus

4- If the file has already been read and it's not a permission issue, try adding a comment at the beginning of the file to see whether it's linked to the fishbucket; if so, adding the comment will push the file out again. Tip: if you're simply monitoring the file, nothing new is being tailed, and the header isn't changing, then nothing will be sent to Splunk. (A small sketch of steps 1 and 4 follows below.)
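
A small sketch of what steps 1 and 4 could look like in practice. The host name is a placeholder, and the sed one-liner assumes GNU sed on a Linux box (it won't work as-is on HP-UX):

index=_internal source=*splunkd.log* host=<your_missing_host> (ERROR OR WARN) (TailReader OR TailingProcessor OR WatchedFile)

# prepend a harmless comment so the leading bytes of the file (which the fishbucket CRC is computed from) change
sudo sed -i '1i # splunk re-read test' /etc/resolv.conf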

Let me know if that helps and what the outputs are so we can dig into this deeper.

Cheers,
David

anandhalagarasa
Path Finder

@DavidHourani ,

Thank you for your swift response.
To answer your query: I can see the internal logs for the servers where the /etc/resolv.conf file is not getting ingested into Splunk Cloud. We have also configured other OS logs, and those are getting ingested for these hosts; only /etc/resolv.conf is missing.

As mentioned in the third step, I ran the command to check the status and I am getting the error below:

          <s:dict>
            <s:key name="parent">/etc/resolv.conf</s:key>
            <s:key name="type">File did not match whitelist '(\.conf|\.cfg|config$|\.ini|\.init|\.cf|\.cnf|shrc$|^ifcfg|\.profile|\.rc|\.rules|\.tab|tab$|\.login|policy$)'.</s:key>
          </s:dict>
        </s:key>

I haven't tried the 4th step yet. Where am I going wrong? Kindly help me spot it.


DavidHourani
Super Champion

If you read here " <s:key name="type">File did not match whitelist '(\.conf|\.cfg|config$|\.ini|\.init|\.cf|\.cnf|shrc$|^ifcfg|\.profile|\.rc|\.rules|\.tab|tab$|\.login|policy$)'.</s:key>"

Apparently the whitelist in your input is not matching the file. Try modifying the input stanza's whitelist so it matches resolv.conf better.
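
For example, if the whitelist sits on a directory monitor, a pattern that explicitly matches the file could look like this. This is only a sketch; the sourcetype and index are the placeholders from earlier in the thread:

[monitor:///etc]
whitelist = resolv\.conf$
sourcetype = abc
index = xz
disabled = 0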


anandhalagarasa
Path Finder

@DavidHourani,

I cross-verified with the same command on a working server where the resolv.conf logs are getting ingested into Splunk Cloud, and I got the same error for /etc/resolv.conf, yet somehow the data from that file did get ingested into Splunk Cloud.

And this is the whitelist I pushed from the deployment server.

[serverClass:all_test]
machineTypesFilter=linux-i686, linux-x86_64, HP-UX
whitelist.0 = *
restartSplunkd = true
[serverClass:all_test:app:resolve_app]

So what would be the exact whitelist I need to use, since we have around 180+ servers that need to be ingested into Splunk Cloud?


anandhalagarasa
Path Finder

@DavidHourani
Can you kindly help me find where I am going wrong?


DavidHourani
Super Champion

Hi @anandhalagarasan,

Could you please post your input stanza and its whitelist, not the serverclass whitelist XD


anandhalagarasa
Path Finder

Input stanza (app name abc_xyz):
[monitor:///etc/resolv.conf]
sourcetype = xyz
index = abc
whitelist = *
disabled = 0

Server Class:
[serverClass:all_nix_servers]
machineTypesFilter=linux-i686, linux-x86_64
whitelist.0 = *
restartSplunkd = true
[serverClass:all_nix_servers:app:abc_xyz]


DavidHourani
Super Champion

Yeah, that input doesn't work, due to "cherry picking". You should use the default input stanza from the Linux TA, which contains the following whitelist: '(\.conf|\.cfg|config$|\.ini|\.init|\.cf|\.cnf|shrc$|^ifcfg|\.profile|\.rc|\.rules|\.tab|tab$|\.login|policy$)'

And then add a stanza like this:

[source::...resolv.conf*]
sourcetype = xyz
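
A sketch of how that could fit together. This is an approximation rather than the TA's exact default stanza, the index and sourcetype names are just the placeholders used earlier in this thread, and the [source::...] override is assumed to live in props.conf:

inputs.conf:

[monitor:///etc]
whitelist = (\.conf|\.cfg|config$|\.ini|\.init|\.cf|\.cnf|shrc$|^ifcfg|\.profile|\.rc|\.rules|\.tab|tab$|\.login|policy$)
index = abc
disabled = 0

props.conf:

[source::...resolv.conf*]
sourcetype = xyz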

anandhalagarasa
Path Finder

I have used this stanza as mentioned:

[monitor:///etc/resolv.conf]
sourcetype = xyz
index = abc
_whitelist=(.conf|.cfg|config$|.ini|.init|.cf|.cnf|shrc$|^ifcfg|.profile|.rc|.rules|.tab|tab$|.login|policy$)
[source::...resolv.conf*]
disabled = 0

But those logs are still not getting ingested into Splunk Cloud. Do I need to add or correct anything in the inputs?


anandhalagarasa
Path Finder

Kindly help with this request.


anandhalagarasa
Path Finder

Can anyone help with this request?


xaratos
Explorer

What do the internal logs say for the hosts that work?
Maybe you could check whether the missing hosts appear in the internal logs at all; it would be nice to see what index="_internal" shows.
It might also help to have an outputs.conf that sends the data directly to the indexers.
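
For example, something like this shows which forwarders are phoning home at all:

index=_internal sourcetype=splunkd | stats latest(_time) as last_seen by host

And a minimal outputs.conf sketch with a hypothetical indexer address (in a Splunk Cloud deployment this normally comes from the Universal Forwarder credentials app rather than being hand-written):

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer.example.com:9997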
