Looks like I have a malformed record in Kafka. I assume the connector will keep retrying to post the invalid events until the data ages out of the topic after the retention period? I still see data flowing from Kafka into HEC, so I'm not sure whether this is actually a problem. I'd like to be sure one way or the other.
Jun 18 10:16:56 a0001p5rlog0003 messages [2019-06-18 12:16:23,464] ERROR failed to post events resp={"text":"Invalid data format","code":6,"invalid-event-number":8,"ackId":7672}, status=400 (com.splunk.hecclient.Indexer:181)
Jun 18 10:16:56 a0001p5rlog0003 messages [2019-06-18 12:16:23,465] INFO add 1 failed batches (com.splunk.kafka.connect.SplunkSinkTask:322)
Jun 18 10:16:56 a0001p5rlog0003 messages [2019-06-18 12:16:23,465] INFO total failed batches 1 (com.splunk.kafka.connect.SplunkSinkTask:47)
Jun 18 10:16:56 a0001p5rlog0003 messages [2019-06-18 12:16:23,465] INFO handled 1 failed batches with 9 events (com.splunk.kafka.connect.SplunkSinkTask:130)
The root cause is that HEC is rejecting specific log entries because they are too large; I have opened a separate question to figure out how to fix that.
Is there any way to purge these events before they age out?
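One approach I'm considering is resetting the connector's consumer-group offset past the bad record so it stops re-reading it. This is only a sketch: the group name, topic, and partition below are placeholders for my setup (Kafka Connect sinks normally use a group named `connect-<connector-name>`), and I haven't verified how this interacts with the Splunk connector's own offset tracking.

```
# Stop the connector first so the consumer group is inactive, then
# advance the group's offset on the affected partition past the bad record.
# Group name, topic, and partition are placeholders for my setup.
kafka-consumer-groups.sh \
  --bootstrap-server localhost:9092 \
  --group connect-splunk-sink \
  --topic my-topic:0 \
  --reset-offsets --shift-by 1 \
  --execute
```

Would this work, or does the connector persist its own offsets somewhere that would override the reset?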