Hello
The add-on configured for AWS runs from 3 HFs to pull data from the SQS queue; however, on the SQS side, "Messages Available" grows to 999K+ and is not getting cleared. "Messages in Flight" stays at around 30.
I tried increasing the interval to 20 seconds on the CloudTrail input to see if that would help, but it did not.
The queue keeps growing, and I don't see any errors in splunk_ta_aws_cloudtrail_main.log, only messages like:
"processing 20 records in s3:logs*/AWSLogs/..json.gz"
"fetched 20 records, wrote 20, discarded 0, redirected 0 from s3:logs/AWSLogs/*..json.gz"
Any suggestions on how to ensure the queue is read fast enough to clear the "Messages Available" backlog?
Thanks
Hi @ajith_sukumaran,
To avoid the SQS queue getting clogged, run more input pipelines from the HF against the same SQS queue: on the existing input, select Clone and set the polling period to 90 seconds. Once a message is grabbed by one consumer (input), it is not available to the others, so adding inputs raises the ingestion rate. You can scale this out as far as you need, but make sure your HF resources are not fully throttled by the input processing, since the inputs run in parallel.
Hope this helps, thanks.
Thanks. This is exactly the solution we later got from Splunk as well.
The config would then look like this, for example:
[aws_cloudtrail://AWSCloudTrailData]
sqs_queue = AWS-Splunk
[aws_cloudtrail://AWSCloudTrailData0]
sqs_queue = AWS-Splunk
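For completeness, a minimal sketch of the same two cloned stanzas with the 90-second polling period from the answer above, assuming that polling period maps to the interval setting mentioned in the original question; any other settings stay as on your original input:

[aws_cloudtrail://AWSCloudTrailData]
sqs_queue = AWS-Splunk
interval = 90

[aws_cloudtrail://AWSCloudTrailData0]
sqs_queue = AWS-Splunk
interval = 90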