Getting Data In

Ingesting Log Files in Windows - Some Logs Being Ingested and Some Failing

victorcorrea
Path Finder

Hi team,

I have been experiencing issues with log ingestion in a Windows Server and I was hoping to get some advice.

The files are generated on a mainframe and transmitted to a local share on a Windows server via TIBCO jobs. The files are generated in 9 windows throughout the day - 3 files at a time, varying in size from a few MB up to 3 GB.

The solution has worked fine in lower environments, likely because of looser file/folder restrictions, but in PROD only one or two files per window get ingested.

The logs indicate that Splunk can't open or read the files:

[Screenshot: splunkd.log errors showing the files could not be opened/read]

The running theory is that the process writing the files to disk is locking them, so Splunk can't read them.

I'm currently reviewing the permission sets for the TIBCO service account and the Local System account (the Splunk UF runs as this account) in the lower environments to try to spot any differences that could be causing the issue - based on the information in the post below:

In addition to that, I was exploring the possibility of using the "MonitorNoHandle" stanza, as it seems to fit the use case I am dealing with: monitoring single files that don't get updated frequently. But I haven't been able to determine, based on the documentation, whether I can use wildcards in the filename - for reference, this is the documentation I'm referring to:
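In case it helps to illustrate what I mean, this is roughly the shape of the stanza I was considering (the path, index, and sourcetype below are made-up examples, not our actual values). My reading of the docs is that "MonitorNoHandle" targets one specific file per stanza, so I am assuming wildcards are not supported and each file would need its own stanza:

# Hypothetical example - one MonitorNoHandle stanza per file
# (my reading of the docs: single files only, no wildcards)
[MonitorNoHandle://D:\mainframe_share\logs\batch_window_01.log]
index = mainframe
sourcetype = mainframe:batch
disabled = 0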

I'd appreciate any insights from the community, either regarding permissions or the use of the "MonitorNoHandle" input stanza.

Thanks in advance,

1 Solution

victorcorrea
Path Finder

We have been able to validate that the issue was the TIBCO File Watcher process locking each file until it had finished writing it to disk; therefore, the Splunk UF could not open/read the file to ingest it.

I wanted to check with the TIBCO team whether there was a way to change the sharing mode with which the File Watcher process opened the file (to ensure it included FILE_SHARE_READ), but they suggested a simpler and just-as-effective solution.

TIBCO will initially create the files as ".tmp" files, so they won't match the name pattern in the monitor stanza. When writing to disk has completed, TIBCO will drop the ".tmp" extension so the files match the monitor stanza.

That way, Splunk will only try to ingest files that have been fully written to disk and, therefore, are not locked.
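For anyone landing here later, a minimal sketch of how the monitor stanza interacts with this approach (the path and names below are examples, not our actual configuration):

# Example monitor stanza - only matches *.log, so the in-progress
# *.tmp files that TIBCO writes first are never picked up
[monitor://D:\mainframe_share\logs\*.log]
index = mainframe
sourcetype = mainframe:batch
# optional belt-and-braces: explicitly exclude temp files
blacklist = \.tmp$

Once TIBCO renames a ".tmp" file to ".log", the file matches the stanza and gets ingested.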


gcusello
SplunkTrust

Hi @victorcorrea ,

I usually use the monitor input for these requirements.

I like to use the batch input, which removes files after reading, but I experienced an issue with big files, so, as far as I know, the best solution is monitor.
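For reference, this is the kind of batch stanza I mean (the path is only an example); the batch input requires move_policy = sinkhole and deletes each file after indexing it:

# Example only - batch reads each file once, then deletes it
[batch://D:\mainframe_share\staging\*.log]
move_policy = sinkhole
index = mainframe
sourcetype = mainframe:batch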

I never used MonitorNoHandle.

About permissions, you have to grant read access to the splunk user or to its group.

Usually files to be read have 644 permissions.

Ciao.

Giuseppe

victorcorrea
Path Finder

Ciao @gcusello ,

Thanks for chiming in.

The Universal Forwarder runs as the Local System account on this server, so it has full access to the folder and files.

I believe the issue might be with the TIBCO process that writes the logs to disk - and locks them while doing so. Since the files are large, Splunk tries to ingest them while they are still being written to disk and, therefore, locked by the TIBCO process.

I wanted to try adding a delay to log ingestion in the UF settings, but I am not really sure how to effectively achieve that.
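The closest knob I have found in inputs.conf is time_before_close, which (as I understand it) delays closing a file after reaching EOF rather than delaying the first read, so I am not sure it would help with a lock at open time. A sketch with made-up values:

# Sketch - wait longer after reaching EOF before closing the file
# (delays the close, not the initial open, so it may not avoid the lock)
[monitor://D:\mainframe_share\logs\*.log]
time_before_close = 60
index = mainframe
sourcetype = mainframe:batch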

Regards,
Victor


gcusello
SplunkTrust

Hi @victorcorrea ,

As far as I know, you cannot add a delay to ingestion.

You could create a script that copies the files to another folder, removing them after copying, so you're sure that they have the correct permissions and no locks, but (I know it) it's a porkaround!

Ciao.

Giuseppe
