I have a syslog server which forwards application data to Splunk indexers.
I added a new entry in my rsyslog config to ingest Bit9 application logs. I also created the necessary syslog inputs and syslog props files on my deployment server.
I pushed it out to the syslog forwarder by reloading the necessary serverclass file, and doing a btool on the forwarder I can see the correct config:
/opt/splunkforwarder/bin/splunk btool inputs list --debug | grep bit9
infosec_sy [monitor:///var/syslog/bit9/*.log]
infosec_sy index = sep_bit9
infosec_sy sourcetype = bit9_eventlog
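The same kind of btool check can be run on the indexer side to confirm the index definition actually exists there — a quick sketch, assuming a default Splunk Enterprise install path:

```
# On the indexer: check whether the sep_bit9 index is defined anywhere
/opt/splunk/bin/splunk btool indexes list sep_bit9 --debug
# No output at all means no indexes.conf stanza defines that index
```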
Yet I cannot see any data when I search for this index, even though I can see data on the syslog server in the bit9 directory.
my deployment server:
syslog-inputs
[monitor:///var/syslog/bit9/*.log]
sourcetype = bit9_eventlog
index = sep_bit9
disabled = false
This inputs.conf makes it to the syslog forwarder, so I know the forwarder has the right config. Yet I don't know why I am not getting data in my indexer.
Anyone have any ideas I can proceed with? Thanks in advance.
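For reference, the serverclass reload step described above can be done from the deployment server CLI; a sketch (the serverclass name below is a placeholder, not the poster's actual class):

```
# On the deployment server: re-read serverclass.conf and push changed apps
/opt/splunk/bin/splunk reload deploy-server

# Or limit the reload to one serverclass (name is hypothetical):
/opt/splunk/bin/splunk reload deploy-server -class syslog_forwarders
```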
Thanks Ayn, you guys were right, it was an oversight on my part. My indexes.conf file was missing this particular index. I added it, and now my indexer has the index definition and can catalogue the Bit9 log data.
Thanks again for everyone's input.
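For anyone hitting the same error, a minimal indexes.conf stanza on the indexer would look something like this — the paths are illustrative defaults, not the poster's actual storage layout:

```
[sep_bit9]
homePath   = $SPLUNK_DB/sep_bit9/db
coldPath   = $SPLUNK_DB/sep_bit9/colddb
thawedPath = $SPLUNK_DB/sep_bit9/thaweddb
```

A restart (or rolling restart across an indexer cluster) is needed for the new index to take effect.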
In the web UI on the indexer, go to the page listing indexes and see if "sep_bit9" is there and active. It shouldn't be, since the indexer is saying it doesn't exist...
Interesting tidbit: we have multiple syslog forwarders, but only one that I am forwarding these logs to.
Yet when I search: index=_internal /var/syslog/bit9/*.log
I get messages "Adding watch on path: /var/syslog/bit9/bit9.log." from all the other forwarders except the one that has these logs. But I know that it does take my inputs.conf, because I tested commenting it out and pushed it out, and the inputs file on my forwarder was commented out; when I uncomment and push it, it becomes uncommented. All I get from this one is "Parsing configuration stanza: monitor:///var/syslog/bit9/bit9.log".
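To cut the noise from the other forwarders, the _internal search can be narrowed to the one host in question — a sketch, where the host value is a placeholder for the actual forwarder's hostname:

```
index=_internal host=<your_forwarder> source=*splunkd.log*
    (component=TailingProcessor OR component=WatchedFile)
    "/var/syslog/bit9"
```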
I checked on my indexer under /var/splunkhot/splunk/etc/apps/infosec_syslog_props
and my props.conf is in there:
[bit9_eventlog]
SHOULD_LINEMERGE = false
MAX_TIMESTAMP_LOOKAHEAD = 16
TIME_FORMAT = %b %d %H:%M:%S
TRANSFORMS-host = infosec_syslog_host
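One sanity check on that stanza: with TIME_FORMAT = %b %d %H:%M:%S, a rendered timestamp like "Mar 10 14:22:51" is 15 characters, so MAX_TIMESTAMP_LOOKAHEAD = 16 only works if the timestamp sits at the very start of each event. An annotated copy of the same stanza:

```
[bit9_eventlog]
SHOULD_LINEMERGE = false
# "Mar 10 14:22:51" is 15 chars; a lookahead of 16 assumes the
# timestamp starts at the beginning of the event (no TIME_PREFIX set)
MAX_TIMESTAMP_LOOKAHEAD = 16
TIME_FORMAT = %b %d %H:%M:%S
TRANSFORMS-host = infosec_syslog_host
```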
Ayn
I have the index created in my inputs.conf file on my deployment server and I push this out to the indexers via a serverclass file.
How do I verify what indexes exist on my indexer?
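Besides the web UI, a couple of standard searches will list what indexes an indexer actually has (these are generic approaches, nothing specific to this environment):

```
| rest /services/data/indexes | table title

| eventcount summarize=false index=* | dedup index | table index
```

The first shows every configured index; the second shows only indexes that contain events.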
You're sending events to a non-existent index on ptc-lpslapp701. You need to create the index.
Update: I am getting the following warning/error message on my search head: Search peer ptc-lpsplapp701 has the following message: received event for unconfigured/disabled/deleted index='sep_bit9' with source='source::/var/syslog/bit9/bit9.log' host='host::ptc-wgbit9ps701' sourcetype='sourcetype::bit9_eventlog' (1 missing total)
Most answers reference maybe a typo in the inputs file, but my inputs file seems fine to me:
[monitor:///var/syslog/bit9/bit9.log]
sourcetype = bit9_eventlog
index = sep_bit9
disabled = false
ignoreOlderThan = 1d
1 - yes, that index was created (in the syslog inputs file)
2 - I think so, that search yields the following:
03-10-2014 14:22:51.122 +0000 INFO WatchedFile - Will begin reading at offset=55741583 for file='/var/syslog/bit9/bit9.log'.
3 - not sure how to do this?
I think the timestamping should be OK; it's a basic %b %d %Y %I:%M:%S timestamp format, so my props is simple as well.
I restarted the agent on the forwarder again and I can see the WatchedFile path entry in my search, yet still no real data even though the bit9.log file is constantly being written to.
Once you confirm the file is being read and tagged, try searching over all-time or all-time-real-time when indexing the next file. Perhaps there is also a timestamping issue.
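The all-time check can be done from the search bar by widening the time bounds explicitly; the far-future latest also catches events whose timestamps were parsed into the future, a common symptom of a bad TIME_FORMAT:

```
index=sep_bit9 earliest=1 latest=+10y
```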
1) Does that sep_bit9 index exist on the indexer?
2) Was the file read?
index=_internal WatchedFile filepath
index=_internal TailingProcessor [filepath]
3) Turn on debug logging for TailingProcessor and WatchedFile
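Concretely, the searches in 2) with the monitored path filled in, plus one way to do 3) from the CLI without a restart — the forwarder path assumes a default Universal Forwarder install:

```
index=_internal WatchedFile "/var/syslog/bit9/bit9.log"
index=_internal TailingProcessor "/var/syslog/bit9"

# On the forwarder, raise verbosity for the relevant log channels:
/opt/splunkforwarder/bin/splunk set log-level TailingProcessor -level DEBUG
/opt/splunkforwarder/bin/splunk set log-level WatchedFile -level DEBUG
```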
Also, I restarted the agent on the forwarder and I can see it's monitoring the correct path:
]$ tail -f /opt/splunkforwarder/var/log/splunk/splunkd.log | grep monitor
03-10-2014 14:22:47.525 +0000 INFO TailingProcessor - Parsing configuration stanza: monitor:///var/syslog/bit9/*.log.
Hey,
Yes, all the other inputs are sending data just fine.
I simply added this new one by adding a monitor stanza to the syslog inputs file and a props stanza to the syslog props file, then reloading the syslog forwarders class in serverclass. To be thorough, I also reloaded the syslog all_search and syslog all_indexer classes as well.
Not too sure how to check the file input status, however; the btool command shows the correct inputs file is loaded.
Also, can you explain what the script does and how I would use it?
A couple of questions:
- Are any other inputs from this forwarder making it to the indexer? Otherwise this might be an output issue rather than an input issue.
- Did you check the status of the file input? See amrit's excellent script here: http://blogs.splunk.com/2011/01/02/did-i-miss-christmas-2/ alternatively look at the endpoint it uses directly: https://