Hi,
Splunk Enterprise latest
New to Splunk. I'm ingesting from some appliances via syslog on a UDP port. Ingestion itself is fine: event counts are actively increasing. However, when I go into Search, results have completely stopped.
For example, I have 50k events and a latest update at 10:52. I click on the data source "udp:9006" and the last event shown is from 10:30. Everything was working in real time up until 10:30, then search results just stop completely.
Any ideas? Thanks
Hi @law175 ,
do you have any messages from Splunk?
The behavior you described matches a license violation: this occurs if you index more than your daily quota more than 2 times in 30 calendar days on a Trial License, or more than 45 times in 60 calendar days on a Term License.
Check the [Settings > License] page.
If you're not in violation there could be another situation:
are you specifying the index in your search?
In other words, try adding index=your_index or index=* at the beginning of your search, in case your index isn't in the default search path.
Third possibility: do you have the permissions to read data from that index?
Ciao.
Giuseppe
Licensing is fine.
I switched from the UDP input to a TCP input from the SAME source and everything is fine. All the logs are ingested, indexed, and searchable properly. UDP seems to be the issue.
I am using the admin account created during install. Splunk is installed on a Windows server with admin privileges.
It is just a single instance. No cluster or separate indexers.
Hi @law175,
a very stupid question: when you changed protocol from UDP to TCP, did you also change the input stanza?
which search are you using?
Ciao.
Giuseppe
Yes. What I do is: Data Inputs > TCP > New Local TCP > Port 9008 / Source type = syslog / Method = IP / App Context = Search & Reporting / Index = default.
If I switch, I delete the TCP input and add a new UDP input with the same settings.
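For reference, the UI steps above should correspond to stanzas like these in inputs.conf. This is only a sketch: the file location and the index name are assumptions ("Index = default" in the UI maps to whatever default index is configured, usually main):

```ini
# Assumed location: $SPLUNK_HOME/etc/apps/search/local/inputs.conf

[tcp://9008]
sourcetype = syslog
connection_host = ip
index = main

[udp://9004]
sourcetype = syslog
connection_host = ip
index = main
```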
Right now I have TCP 9008 and UDP 9004 running, with the same appliance forwarding logs via syslog to both ports. TCP has been working fine for 4 days now (though with lots of dropped events). The appliance sends logs fine via UDP, but within an hour events stopped appearing in Splunk search.
The same log keeps appearing as the last shown event. It is a memory output that spans around 50 lines (so I assume it is too big). Could an error with a syslog message that is too big be breaking search?
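If an oversized multi-line event were the problem, it would normally show up as a truncated event rather than as broken searching, but the limit can be raised to rule it out. This is only a sketch assuming the sourcetype is syslog; the value is illustrative:

```ini
# Assumed location: $SPLUNK_HOME/etc/system/local/props.conf
[syslog]
# Raise the per-event byte limit (Splunk's default is 10000)
TRUNCATE = 100000
```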
The search I am using is source="udp:9004"
Hi @law175,
what's the result of running
index=your_index host=192.168.79.1
Do you have one or two sources?
Ciao.
Giuseppe
For now just one.
All logs are forwarded to a logging server (VMware vRealize Log Insight), and from that appliance I am sending logs via syslog to Splunk. All logs should come from 192.168.79.1 on either UDP 9004 or TCP 9008, depending on which I choose.
UDP. The picture shows UDP. I am sending logs via syslog on port UDP 9004.
I only opened TCP:9008 for testing purposes. Everything works on TCP as expected.
I want to fix UDP.
Hi @law175,
let me understand: are you sending on UDP 9008, TCP 9008, or both?
Which one(s) should you have?
Which one(s) are you receiving?
Ciao.
Giuseppe
UDP. Just UDP. I only did TCP for testing purposes. I only want to receive UDP.
I switched the time range from "Past 5 minutes" to "All time (real-time)" and logs are appearing. It seems there is an issue with how Splunk is processing the time of these logs.
The timestamp is correct: the timestamp received by Splunk matches what is recorded on the appliance.
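A common cause of events being indexed but invisible in recent-time searches is a timezone mismatch: if Splunk assigns an event a time several hours in the past or future, it falls outside a "Past 5 minutes" window even though the clock strings appear to match. If that turns out to be the case here, a hedged sketch of the fix (the stanza name and TZ value are assumptions to adjust):

```ini
# Assumed location: $SPLUNK_HOME/etc/system/local/props.conf
# Illustrative only: force the timezone Splunk assumes for this source
[source::udp:9004]
TZ = UTC
```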
"All time (real-time)" is the only search range that shows logs. Searching any other time range does not work.
For example, see the pictures below: searching the previous 5 minutes shows no logs, but switching to "All time (real-time)" shows all of them.
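One way to confirm a timestamp problem is to compare each event's parsed time (_time) with the time it was actually written to the index (_indextime), searched over All time. A sketch in SPL, assuming the source name from above:

```
source="udp:9004" earliest=0
| eval lag_seconds = _indextime - _time
| stats min(lag_seconds) AS min_lag max(lag_seconds) AS max_lag count
```

A large positive or negative lag means events are being assigned timestamps far from when they arrived, which would explain why only "All time" shows them.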