I know this has been covered and there are many answers, but from what I can tell, my inputs.conf is correct.
[monitor://C:\Program Files (x86)\Syslogd\Logs\*.txt]
disabled = 0
index = malware
sourcetype = malwarebytes
Basically, this was working in our dev environment, and when I pushed the same inputs.conf from dev to production, we are not getting any logs sent to Splunk. And splunkd does not show any errors:
TailingProcessor - Parsing configuration stanza: monitor://C:\Program Files (x86)\Syslogd\Logs\*.txt.
09-13-2016 16:21:57.812 -0700 INFO TailReader - State transitioning from 1 to 0 (initOrResume).
09-13-2016 16:21:57.812 -0700 INFO TailReader - State transitioning from 1 to 0 (initOrResume).
09-13-2016 16:21:57.812 -0700 INFO TailingProcessor - Adding watch on path: C:\Program Files (x86)\Syslogd\Logs.
So I am not sure why it isn't ingesting properly.
Yes, I can see all the splunkd logs and Windows logs getting ingested, and as of this morning there is data in the malware index. My guess was that because Splunk read the files once in the dev environment, it did not read them again in the production environment. I am not sure, though, as a new file was created last night/early this morning and that data was ingested. All the old *.txt files that were present while we were running tests in dev did not get ingested.
That seems weird to me as dev and prod are completely separate instances. But it is working now. Thanks all
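In case it helps others hitting the same symptom (old files skipped, new files ingested): Splunk tracks files it has already indexed by a CRC checksum (the "fishbucket"), so a file an instance has seen once is not read again. If you genuinely need previously-seen files re-indexed, `crcSalt = <SOURCE>` in the monitor stanza forces the CRC to include the file path. A sketch only; be aware this re-reads every matching file and can create duplicate events if the data was already indexed:

```ini
[monitor://C:\Program Files (x86)\Syslogd\Logs\*.txt]
disabled = 0
index = malware
sourcetype = malwarebytes
# Forces the CRC to include the full source path, so files
# already recorded in the fishbucket are read again.
crcSalt = <SOURCE>
```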
Can you please explain how you got it working? I have the same issue: I can see other security and Windows event logs in Splunk, but when I configured a monitor for some text files on the D: drive, they are not showing up in Splunk.
Please guide me.
Just to clear up the obvious... Do you have -
An index named malware in production?
A props.conf file for the sourcetype malwarebytes in production?
Are there new log events in production?
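On the props.conf question: one is not strictly required for a monitor input to ingest data, but without it Splunk guesses at line breaking and timestamps, which can make events land outside your search window. A minimal sketch for this sourcetype, assuming single-line events; the settings are standard props.conf attributes, but you would need to adapt the timestamp handling to your actual Malwarebytes log format:

```ini
[malwarebytes]
# Treat each line as one event instead of merging lines heuristically.
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# Limit how far into the event Splunk searches for a timestamp.
MAX_TIMESTAMP_LOOKAHEAD = 30
```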
Yes I have an index called malware in production
I do not have a props.conf for the sourcetype malwarebytes, as I did not need one in dev
Yesterday was the last day that files were written out to the directory where we are monitoring the *.txt files
And splunkd shows nothing out of the ordinary.
Run splunk btool inputs list monitor to check whether your config is applied correctly.
Run splunk list inputstatus to check whether Splunk is reading the directory and the files.
Check whether your service account has permission to read files in that directory.
Check any intermediate parsing layer for configured nullQueues.
Search index=_internal sourcetype=splunkd source=*metrics.log series=*Syslogd* over all time to see whether data was sent.
Search the index over all time; maybe you have a timestamping issue.
tcpdump the traffic to see whether the input instance is sending out events that get lost somewhere.
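The first two checks above can be run from the Splunk CLI on the forwarder. A sketch, assuming a default Windows universal forwarder install path (adjust to your environment); --debug shows which .conf file each setting comes from, which quickly exposes precedence problems between app contexts:

```
cd "C:\Program Files\SplunkUniversalForwarder\bin"
splunk btool inputs list monitor --debug
splunk list inputstatus
```

In the inputstatus output, look for your monitored path and its file position: "finished reading" means Splunk believes it has already consumed the file, while an absent entry suggests the stanza never matched.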
This list is almost never-ending ... Good luck and I hope you find the missing puzzle piece.