Getting Data In

Why are my Splunk 6.3.1 AIX forwarders only forwarding _internal data, not the file being monitored?

gbowden_pheaa
Path Finder

I have several AIX forwarders that are not happy with what I think is a simple monitor. We are trying to ingest a single file, and read permissions on it are in place. The _internal files are being ingested. Everything appears correct from the forwarder, and everything appears correct from the deployment server. Only the one desired file is not being monitored. Please tell me what I am doing wrong.

Forwarder version 6.3.1

Here's the inputs.conf

[monitor:///home/notes/notesr4/IBM_TECHNICAL_SUPPORT/console.log]
_TCP_ROUTING = test_cluster
index = t_webapp
sourcetype = COTS:Notes

Here's the outputs.conf

[tcpout]
defaultGroup = test_cluster
indexAndForward = false

[tcpout:test_cluster]
server = indexer1:9997,indexer2:9997
autoLB = true
useACK = true
maxQueueSize = 7MB

Here are the _internal message types:

02-05-2016 12:35:53.181 -0500 ERROR TailingProcessor - Ran out of data while looking for end of header
02-05-2016 09:43:00.622 -0500 WARN  DistributedPeerManager - feature=DistSearch not enabled for your license level
02-05-2016 09:43:00.349 -0500 WARN  ClusteringMgr - Ignoring clustering configuration, the active license disables this feature.
02-05-2016 09:42:57.293 -0500 WARN  ulimit - Core file generation disabled

I'm looking into the ulimit message.

Here are the forwarder CLI results for forward-server and monitor:

kodiak:/splunkforwarder/bin> ./splunk list forward-server
Active forwards:
        indexer1:9997
Configured but inactive forwards:
        indexer2:9997

kodiak:/splunkforwarder/bin> ./splunk list monitor
Monitored Directories:
        $SPLUNK_HOME/var/log/splunk/splunkd.log
                /splunkforwarder/var/log/splunk/audit.log
                /splunkforwarder/var/log/splunk/btool.log
                /splunkforwarder/var/log/splunk/conf.log
                /splunkforwarder/var/log/splunk/first_install.log
                /splunkforwarder/var/log/splunk/license_audit.log
                /splunkforwarder/var/log/splunk/license_usage.log
                /splunkforwarder/var/log/splunk/metrics.log
                /splunkforwarder/var/log/splunk/mongod.log
                /splunkforwarder/var/log/splunk/remote_searches.log
                /splunkforwarder/var/log/splunk/scheduler.log
                /splunkforwarder/var/log/splunk/searchhistory.log
                /splunkforwarder/var/log/splunk/splunkd-utility.log
                /splunkforwarder/var/log/splunk/splunkd.log
                /splunkforwarder/var/log/splunk/splunkd_access.log
                /splunkforwarder/var/log/splunk/splunkd_stderr.log
                /splunkforwarder/var/log/splunk/splunkd_stdout.log
                /splunkforwarder/var/log/splunk/splunkd_ui_access.log
        $SPLUNK_HOME/var/spool/splunk/...stash_new
Monitored Files:
        $SPLUNK_HOME/etc/splunk.version
        /home/notes/notesr4/IBM_TECHNICAL_SUPPORT/console.log

All suggestions are welcome... thanks in advance.

ddrillic
Ultra Champion

The entire ingestion process is recorded in $SPLUNK_HOME/var/log/splunk/splunkd.log (on your forwarders, /splunkforwarder/var/log/splunk/splunkd.log). Please go through it carefully.
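
For example, grepping the forwarder's splunkd.log for the monitored path and the tailing processor is a quick first pass:

grep -i console.log /splunkforwarder/var/log/splunk/splunkd.log
grep TailingProcessor /splunkforwarder/var/log/splunk/splunkd.log | tail -20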

dwaddle
SplunkTrust

Given all the effort you've put in here, here's one more thing to look at. When you look at "multiple generations" of this file, are the first 256 bytes the same? I know some IBM tools like WebSphere make the first 256 bytes almost always the same. This can cause Splunk's "initial CRC" functionality to think a file has already been processed when it actually has not. If the first 256 bytes are always the same, then you can add initCrcLength = 512 (or some other number that ensures there are always unique bytes) to the monitor stanza in inputs.conf.
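
Based on the inputs.conf you posted, that would look something like this (512 is only an example; use whatever length reaches past the repeated bytes):

[monitor:///home/notes/notesr4/IBM_TECHNICAL_SUPPORT/console.log]
_TCP_ROUTING = test_cluster
index = t_webapp
sourcetype = COTS:Notes
initCrcLength = 512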

Also, make sure that on your forwarders and indexers there is no mention of INDEXED_EXTRACTIONS for your COTS:Notes sourcetype. These files are not sufficiently structured to take advantage of indexed extractions.
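
A quick way to check on both sides is something like:

./splunk cmd btool props list COTS:Notes --debug | grep -i INDEXED_EXTRACTIONS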

If this doesn't work, can you anonymize and gist a sample of the file?

sandipan11
Path Finder

I am facing the same problem.

I have checked the sourcetype with btool on both the forwarder and the indexer. On the forwarder:

[root@ser1 bin]# ./splunk cmd btool --debug props list COTS:Notes
[root@ser1 bin]#

and on the indexer:

-sh-4.1$ ./splunk cmd btool --debug props list COTS:Notes
-sh-4.1$

Both return nothing. I have another UF instance on the same server and it is working fine. Please help me find a solution.

gbowden_pheaa
Path Finder

There are no INDEXED_EXTRACTIONS settings. Querying the
https://localhost:8089/services/admin/inputstatus/TailingProcessor:FileStatus
endpoint, I get the following:

/home/notes/notesr4/IBM_TECHNICAL_SUPPORT/console.log

file position 65536
file size 10485760
percent 0.62
type open file
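
For reference, the endpoint can also be queried from the command line (credentials below are placeholders):

curl -k -u admin:changeme https://localhost:8089/services/admin/inputstatus/TailingProcessor:FileStatus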

I am not sure why this is stopping at exactly 64 KB (65536 bytes), but it does not pull any data. Here's a sample of the file:

console_kodiak_2016_02_14@05_33_10.log
[6684810:00009-01286] 02/21/2016 12:25:37   DIIOP Server: 10.109.6.122 connected
[6684810:00009-01286] 02/21/2016 12:25:37   DIIOP Server: 10.109.6.122 disconnected
[7667964:00121-29813] 02/21/2016 12:27:33   0 Transactions/Minute, 0 Notes Users
[7929956:00002-00001] 02/21/2016 12:27:44   AMgr: Start executing agent 'Process PSXActions Process PSXActions' in 'testarea/APSvNew.nsf' by Executive '1'
[7929956:00002-00001] 02/21/2016 12:27:44   AMgr: 'server/domain' is the agent signer of agent 'Process PSXActions Process PSXActions' in 'testarea/APSvNew.nsf'
[7929956:00002-00001] 02/21/2016 12:27:44   AMgr: 'Agent 'Process PSXActions Process PSXActions' in 'testarea/APSvNew.nsf' will run on behalf of 'kodiak/AES'
[7929956:00002-00001] 02/21/2016 12:27:44   AMgr: Agent 'Process PSXActions Process PSXActions' in database 'testarea/APSvNew.nsf' signed by 'kodiak/AES' is running in Full Administrator mode
[7929956:00002-00001] Agent 'Process PSXActions|Process PSXActions' calling script library 'lsFormNewResourceRequest': Agent signer 'CN=server/O=domain', Script library signer 'CN=server/O=domain'

Thank you for helping; I am getting much more from you than I am from Splunk Support.

dwaddle
SplunkTrust

Hmm, so... can you paste the output of splunk cmd btool --debug props list COTS:Notes? From both your forwarder and indexer, please. This has the smell of a bug.

dwaddle
SplunkTrust

Any news on this issue? You've piqued my interest...

gbowden_pheaa
Path Finder

My issue was unfortunately not visible in the configs I provided. In my props.conf I had specified a field delimiter, but the data files do not have a header, so this one setting broke the entire data collection. Once it was removed, everything worked as expected.

Some of the events use a "~" as a delimiter; I ended up using search-time extractions to pull the fields (sketched below).
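
In case it helps anyone who hits the same thing, here is a minimal sketch of the search-time approach (the stanza and field names are illustrative, not my exact config):

props.conf on the search head:

[COTS:Notes]
REPORT-notes_fields = cots_notes_tilde

transforms.conf:

[cots_notes_tilde]
DELIMS = "~"
FIELDS = field1, field2, field3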

Thank you for your help...

gbowden_pheaa
Path Finder

Does Notes lock the console.log file so that Splunk cannot read this file?

Further investigation into this has "changed" my understanding of the symptoms. The file in question is a running log of Lotus Notes core functions. It appends until this file reaches either a size or date limit, then archives the file and starts a new file. Here's what I know -

  1. The console.log file will not forward data to the indexers; all other Splunk files from the same server will. This tells me the base configurations on the forwarder are correct.
  2. Running "splunk list monitor" lists the file we want to monitor.
  3. Copying the active file to the /tmp directory still does not allow monitoring.
  4. Copying this copy to another Splunk system (dev) does not allow Splunk to monitor this file.
  5. Copying an archived file to the dev system DOES read and ingest the file.
  6. Permissions to allow Splunk to traverse the file system to this file are in place, but one of the directories is hidden - there is an extended ACL on that directory to allow traversal across it.

I suspect that Notes is appending to this file in an unexpected way, such as keeping the file open and not actually appending to it.

Is Splunk expecting an "end of file" on the file to monitor it?
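
One way to check how Notes is writing the file is to watch its inode and size over time and to see which process holds it open (fuser is standard on AIX; the path is the one from the monitor stanza):

ls -li /home/notes/notesr4/IBM_TECHNICAL_SUPPORT/console.log
fuser /home/notes/notesr4/IBM_TECHNICAL_SUPPORT/console.log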

dwaddle
SplunkTrust

It sounds like permissions. Can you confirm that the user running Splunk can definitely read the file, and has directory traversal rights (x-bit) on all of the directories along the way? What happens if you copy the file to /tmp and monitor it from there? Are there any symlinks in the path?
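
For example, something along these lines, run against each directory in the path (assuming here that the forwarder runs as a user named "splunk"; substitute yours):

su - splunk -c 'cat /home/notes/notesr4/IBM_TECHNICAL_SUPPORT/console.log > /dev/null'
ls -ld /home /home/notes /home/notes/notesr4 /home/notes/notesr4/IBM_TECHNICAL_SUPPORT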

gbowden_pheaa
Path Finder

I have verified that permissions are not an issue: the full path is navigable, and the data can be cat'd either through direct access or by navigating to the file.

I am getting a new error -

02-10-2016 12:38:57.702 -0500 ERROR TailingProcessor - Ignoring path="/home/notes/notesr4/IBM_TECHNICAL_SUPPORT/console.log" due to: Bug during applyPendingMetadata, header processor does not own the indexed extractions confs.

I am not finding anything here in Answers that seems to apply to this error. Thank you in advance...
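
A broader btool sweep can show where extraction-related settings like this come from, e.g.:

./splunk cmd btool props list --debug | grep -iE "INDEXED_EXTRACTIONS|FIELD_DELIMITER"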

gbowden_pheaa
Path Finder

Permissions are fine, and there are no symbolic links in the path. Copying the file out to a test Splunk system handles the file without issue. I can also "cat" the file without issue under the user account running Splunk.

I have installed AIX forwarders on dozens of servers here, this is the first time I've had this issue.

Thank you, dwaddle, for your answer. Any other ideas?

gbowden_pheaa
Path Finder

I just tried copying the file to /tmp, and Splunk still does not ingest it. Officially stumped...
