I have several AIX forwarders that are choking on what I think is a simple monitor stanza. We are looking for one file to ingest, and read permissions are in place for that file. The _internal logs are being ingested. Everything appears correct from the forwarder, and everything appears correct from the deployment server. Only the one desired file is not being monitored. Please tell me what I am doing wrong.
Forwarder version 6.3.1
Here's the inputs.conf
[monitor:///home/notes/notesr4/IBM_TECHNICAL_SUPPORT/console.log]
_TCP_ROUTING = test_cluster
index = t_webapp
sourcetype = COTS:Notes
Here's the outputs.conf
[tcpout]
defaultGroup = test_cluster
indexAndForward = false

[tcpout:test_cluster]
server = indexer1:9997,indexer2:9997
autoLB = true
useACK = true
maxQueueSize = 7MB
Here are the _internal message types:
02-05-2016 12:35:53.181 -0500 ERROR TailingProcessor - Ran out of data while looking for end of header
02-05-2016 09:43:00.622 -0500 WARN DistributedPeerManager - feature=DistSearch not enabled for your license level
02-05-2016 09:43:00.349 -0500 WARN ClusteringMgr - Ignoring clustering configuration, the active license disables this feature.
02-05-2016 09:42:57.293 -0500 WARN ulimit - Core file generation disabled
I'm looking into the ulimit message.
Here are the forwarder CLI results for forward-server and monitor:
kodiak:/splunkforwarder/bin> ./splunk list forward-server
Active forwards:
    indexer1:9997
Configured but inactive forwards:
    indexer2:9997
kodiak:/splunkforwarder/bin> ./splunk list monitor
Monitored Directories:
    $SPLUNK_HOME/var/log/splunk/splunkd.log
    /splunkforwarder/var/log/splunk/audit.log
    /splunkforwarder/var/log/splunk/btool.log
    /splunkforwarder/var/log/splunk/conf.log
    /splunkforwarder/var/log/splunk/first_install.log
    /splunkforwarder/var/log/splunk/license_audit.log
    /splunkforwarder/var/log/splunk/license_usage.log
    /splunkforwarder/var/log/splunk/metrics.log
    /splunkforwarder/var/log/splunk/mongod.log
    /splunkforwarder/var/log/splunk/remote_searches.log
    /splunkforwarder/var/log/splunk/scheduler.log
    /splunkforwarder/var/log/splunk/searchhistory.log
    /splunkforwarder/var/log/splunk/splunkd-utility.log
    /splunkforwarder/var/log/splunk/splunkd.log
    /splunkforwarder/var/log/splunk/splunkd_access.log
    /splunkforwarder/var/log/splunk/splunkd_stderr.log
    /splunkforwarder/var/log/splunk/splunkd_stdout.log
    /splunkforwarder/var/log/splunk/splunkd_ui_access.log
    $SPLUNK_HOME/var/spool/splunk/...stash_new
Monitored Files:
    $SPLUNK_HOME/etc/splunk.version
    /home/notes/notesr4/IBM_TECHNICAL_SUPPORT/console.log
All suggestions are welcome...thanks in advance
Given all the effort you've put in here, here's one more thing to look at. When you look at "multiple generations" of this file, are the first 256 bytes the same? I know some IBM tools like Websphere make the first 256 bytes almost always the same. This can cause Splunk's "initial CRC" functionality to think a file has already been processed when it actually has not. If the first 256 bytes are always the same, then you can change inputs.conf to add initCrcLength = 512 (or some other number that ensures there are always unique bytes).
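To see why identical leading bytes matter, here is a sketch with fabricated files (gen1.log and gen2.log are made up for the demo, not real Notes output): two "generations" that differ only after byte 256 produce the same first-256-byte checksum, which is roughly what the forwarder's default initial CRC compares.

```shell
# Fabricate two log "generations" whose first 256 bytes are identical
# but whose later content differs:
yes 'IBM console banner line' | head -c 256 > gen1.log
{ yes 'IBM console banner line' | head -c 256; echo 'later, different events'; } > gen2.log

# The first 256 bytes checksum identically, so the default initial CRC
# cannot tell the files apart:
head -c 256 gen1.log | cksum
head -c 256 gen2.log | cksum   # same checksum as above

# Fix in inputs.conf: hash a longer prefix so generations look distinct:
#   [monitor:///home/notes/notesr4/IBM_TECHNICAL_SUPPORT/console.log]
#   initCrcLength = 512
```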
Also, make sure that on your forwarders and indexers there is no mention of INDEXED_EXTRACTIONS for your COTS:Notes sourcetype. These files are not sufficiently structured to take advantage of indexed extractions.
If this doesn't work can you anonymize and gist a sample of the file?
I am facing the same problem. I have checked both the indexer and the forwarder:
[root@ser1 bin]# ./splunk cmd btool --debug props list COTS:Notes
and on the indexer:
-sh-4.1$ ./splunk cmd btool --debug props list COTS:Notes
I have another instance of the UF on the same server and it is working fine. Please help me find a solution.
There are no "INDEXED_EXTRACTIONS" settings. Using the command, I get the following:
file position 65536
file size 10485760
type open file
I am not sure why this is stopping at exactly 64 KB, but it does not pull any data. Here's a sample of the file:
console_kodiak_2016_02_14@05_33_10.log
[6684810:00009-01286] 02/21/2016 12:25:37 DIIOP Server: 10.109.6.122 connected
[6684810:00009-01286] 02/21/2016 12:25:37 DIIOP Server: 10.109.6.122 disconnected
[7667964:00121-29813] 02/21/2016 12:27:33 0 Transactions/Minute, 0 Notes Users
[7929956:00002-00001] 02/21/2016 12:27:44 AMgr: Start executing agent 'Process PSXActions Process PSXActions' in 'testarea/APSvNew.nsf' by Executive '1'
[7929956:00002-00001] 02/21/2016 12:27:44 AMgr: 'server/domain' is the agent signer of agent 'Process PSXActions Process PSXActions' in 'testarea/APSvNew.nsf'
[7929956:00002-00001] 02/21/2016 12:27:44 AMgr: 'Agent 'Process PSXActions Process PSXActions' in 'testarea/APSvNew.nsf' will run on behalf of 'kodiak/AES'
[7929956:00002-00001] 02/21/2016 12:27:44 AMgr: Agent 'Process PSXActions Process PSXActions' in database 'testarea/APSvNew.nsf' signed by 'kodiak/AES' is running in Full Administrator mode
[7929956:00002-00001] Agent 'Process PSXActions|Process PSXActions' calling script library 'lsFormNewResourceRequest': Agent signer 'CN=server/O=domain', Script library signer 'CN=server/O=domain'
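For what it's worth, the sample lines follow a regular shape of "[pid:task-counter] MM/DD/YYYY HH:MM:SS message", so if a timestamp extraction were ever needed, a props.conf pair along the lines of `TIME_PREFIX = ^\[[^\]]+\]\s+` and `TIME_FORMAT = %m/%d/%Y %H:%M:%S` should work (an assumption on my part, not something from this thread). A quick sanity check of that line shape:

```shell
# Verify one sample event matches the "[pid:task-counter] date time" shape.
echo '[6684810:00009-01286] 02/21/2016 12:25:37 DIIOP Server: 10.109.6.122 connected' |
  grep -E '^\[[0-9]+:[0-9]+-[0-9]+\] [0-9]{2}/[0-9]{2}/[0-9]{4} [0-9]{2}:[0-9]{2}:[0-9]{2} ' &&
  echo 'line matches'
```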
Thank you for helping; I am getting much more from you than from Splunk Support.
My issue was unfortunately not visible in the configs provided. In my props.conf file I had specified a field delimiter, but the data files did not have a header, so this configuration item broke the entire data collection. Once it was removed, everything worked as expected.
Some of the events used a "~" as a delimiter; I ended up using search-time extractions to pull out the fields.
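For anyone hitting the same thing, a search-time delimiter extraction can be sketched like this (the stanza and field names below are placeholders I made up, not the poster's actual config):

```ini
# props.conf
[COTS:Notes]
REPORT-notes_tilde = notes_tilde_fields

# transforms.conf
[notes_tilde_fields]
DELIMS = "~"
FIELDS = "field1", "field2", "field3"
```

Unlike an indexed/ingest-time delimiter configuration, this runs at search time, so a mismatch between the delimiter and the actual data cannot break ingestion of the file.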
Thank you for your help...
Does Notes lock the console.log file so that Splunk cannot read this file?
Further investigation into this has "changed" my understanding of the symptoms. The file in question is a running log of Lotus Notes core functions. It appends until this file reaches either a size or date limit, then archives the file and starts a new file. Here's what I know -
I suspect that Notes is writing to this file in an unexpected way, for example keeping the file open and not actually appending to it.
Is Splunk expecting an "end of file" on the file to monitor it?
It sounds like permissions. Can you confirm that the user running Splunk can definitely read the file, and has directory traversal rights (x-bit) on all of the directories along the way? What happens if you copy the file to /tmp and monitor it from there? Are there any symlinks in the path?
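The checks above can be scripted. This is a sketch (the helper name `check_path` is mine) that walks every directory from the root down to the log file and reports any missing traverse (x) bit, then checks the file's read bit; run it as the account the forwarder runs under, e.g. via su or sudo:

```shell
# Report missing traverse (x) bits on each ancestor directory and a
# missing read (r) bit on the file itself, for the current user.
check_path() {
  f=$1
  d=$(dirname "$f")
  while [ "$d" != "/" ] && [ "$d" != "." ]; do
    [ -x "$d" ] || echo "no traverse (x) bit: $d"
    d=$(dirname "$d")
  done
  [ -r "$f" ] || echo "not readable: $f"
}

# The path from the question; no output means the checks all passed.
check_path /home/notes/notesr4/IBM_TECHNICAL_SUPPORT/console.log
```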
I have verified that permissions are not an issue: the full path is navigable, and the data can be "cat'd" either through direct access or by navigating to the file.
I am getting a new error:
02-10-2016 12:38:57.702 -0500 ERROR TailingProcessor - Ignoring path="/home/notes/notesr4/IBM_TECHNICAL_SUPPORT/console.log" due to: Bug during applyPendingMetadata, header processor does not own the indexed extractions confs.
I am not finding anything here in Answers that seems to apply to this error. Thank you in advance...
Permissions are fine, and there are no symbolic links in the path. Copying the file out to a test Splunk system handles the file without issue. I can also "cat" the file without issue under the user account running Splunk.
I have installed AIX forwarders on dozens of servers here, this is the first time I've had this issue.
Thank you, dwaddle, for your answer. Any other ideas?