Hi everyone,
I’m currently sending vCenter logs via syslog to Splunk and have verified that the syslog configuration and the index name on Splunk are correct. However, the logs still aren’t appearing in the index.
I ran tcpdump and can see the syslog packets arriving at my Splunk instance.
Below I’ve attached the syslog configuration and the tcpdump output from my Splunk instance.
What could be the cause of this issue, and what steps should I take to troubleshoot it?
Thanks for any insights!
A few questions:
Do you have a TA (add-on) for the logs you are ingesting, and is it installed on all the required Splunk components? Check the add-on’s documentation for where it needs to go.
Looking at the _internal index, can you see that Splunk has actually ingested the data?
Can you search for a string that you know exists in your logs, across all your indexes and over the time range in which you verified the data arrived, and find any matching events? (See the example searches below.)
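For those last two checks, searches along these lines should work; the index name yourindex and the quoted string are placeholders for your own values, and the time range should cover when you saw the packets arrive.

Throughput recorded for your index in the internal metrics:
index=_internal source=*metrics.log* group=per_index_thruput series=yourindex

Errors from splunkd around the same time:
index=_internal source=*splunkd.log* log_level=ERROR

The events themselves, regardless of which index or sourcetype they ended up in:
index=* "a string you saw in the tcpdump output" | stats count by index, sourcetype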
Also, for syslog data in general it is simpler and more durable to forward the data to a dedicated syslog server (rsyslog or syslog-ng), have a Universal Forwarder monitor the files it writes, and set up a monitor stanza per host/data source, e.g.:
[monitor:///var/log/...whatever]
whitelist = <regex>
blacklist = <regex>
# host_segment and crcSalt only as needed
host_segment = <path segment that holds the hostname>
crcSalt = <SOURCE>
# use whatever sourcetype fits your data
sourcetype = syslog
index = yourIndex
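On the syslog server itself, a minimal rsyslog sketch of this pattern might look like the following; the port, directory, and template name are just illustrative, so adjust them to your environment:

$ModLoad imudp
$UDPServerRun 514
# write one file per sending host, which the UF then monitors
$template PerHostFile,"/var/log/remote/%HOSTNAME%/syslog.log"
*.* ?PerHostFile

With a layout like /var/log/remote/<hostname>/syslog.log, host_segment = 4 in the monitor stanza above would set the event host from the directory name.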
Consult also: How the Splunk platform handles syslog data over the UDP network protocol - Splunk Documentation