Splunk/AWS SQS Queue polling inconsistently

arlombar
Explorer

As the title says, I'm running into an issue with what appears to be the pull rate from SQS queues. For example, right now we have 3 different SQS queues and 15 inputs per queue to address all of the regions they cover. According to the AWS Add-on docs, this appears to be a recommended way to scale and increase throughput.

However, in our production environment, we have a backlog of events in one of the SQS queues, and it doesn't look like Splunk is able to process all of them. It is stuck in a limbo state where it keeps pulling in events but never catches up to the most recent ones. According to the docs, SQS queue throughput in Splunk should be around 670 EPS, yet the numbers I am pulling from _internal are significantly lower. Has anyone run into an issue like this or addressed a similar SQS problem?
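For reference, here is roughly the search I'm using to pull those numbers from _internal. It's a minimal sketch that reads the per-sourcetype throughput samples the indexers write to metrics.log; the series value is an assumption on my part and needs to match whatever sourcetype your SQS inputs actually produce (aws:cloudtrail below is just an illustration):

    index=_internal source=*metrics.log* group=per_sourcetype_thruput series="aws:cloudtrail"
    | timechart span=5m avg(eps) AS avg_eps, max(eps) AS peak_eps

Each metrics.log sample covers a 30-second window, so averaging eps over 5-minute buckets gives a reasonably smooth view to compare against the documented 670 EPS figure.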
