Hi everyone,
As part of a project, I'm integrating Jira with Splunk to visualize ticket data (status, priority, SLA, etc.). I'm currently concerned about the data volume and how Splunk handles high ticket traffic.
Does anyone have experience with sending a large number of Jira tickets (thousands or more) to Splunk on a regular basis?
- Are there limits or performance issues to be aware of?
- Should I split the integration by project, or is it manageable in a single pipeline?
- Are there any best practices for optimizing ingestion and storage in Splunk in such cases?
Any insights or shared experiences would be highly appreciated. Thanks in advance!
It's not clear what you want to achieve, but:
1. Volume-wise, properly designed Splunk environments can process volumes up to the petabyte-per-day range (of course not on a single server 😉), so "thousands" of tickets won't impress your Splunk.
2. Remember that Splunk is not your typical RDBMS, so think carefully about what you want to achieve and what data you need to do it. Once you ingest an event, it's immutable, so decide on your fields up front (see the sketch below).
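To make point 2 concrete, here is a minimal sketch of pushing one Jira ticket into Splunk via the HTTP Event Collector (HEC), keeping only the fields you will actually search on. The URL, token, sourcetype name, and ticket field names are all placeholder assumptions, not values from this thread:

```python
import json
import requests  # assumes the requests library is installed

# Hypothetical values -- replace with your own HEC endpoint and token.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def send_ticket(ticket: dict) -> None:
    """Send one Jira ticket to Splunk as a structured JSON event.

    Picking only the fields you will chart (status, priority, SLA, ...)
    keeps events small; since indexed events are immutable, anything
    you ingest now is what you will be searching later.
    """
    event = {
        "sourcetype": "jira:ticket",  # hypothetical sourcetype name
        "event": {
            "key": ticket.get("key"),
            "status": ticket.get("status"),
            "priority": ticket.get("priority"),
            "sla_breached": ticket.get("sla_breached"),
        },
    }
    resp = requests.post(
        HEC_URL,
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
        data=json.dumps(event),
        timeout=10,
    )
    resp.raise_for_status()
```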
Hi @Ana_Smith1
In the grand scheme of the data ingestion Splunk deals with, I think you'd be hard-pressed to have any issues importing thousands of Jira tickets per day. Even if each ticket were 20,000 characters long, you would only be looking at roughly 20 megabytes of data per 1,000 tickets, while a single Splunk indexer is typically capable of around 300 gigabytes of ingestion per day.
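For a rough sense of scale, here is that back-of-the-envelope estimate spelled out; all figures are the illustrative assumptions from above, not measurements of your environment:

```python
# Back-of-the-envelope ingestion estimate; all figures are assumptions.
AVG_TICKET_BYTES = 20_000                # ~20,000 characters per ticket event
TICKETS_PER_DAY = 1_000
INDEXER_CAPACITY_BYTES = 300 * 10**9     # ~300 GB/day for one indexer

daily_volume = AVG_TICKET_BYTES * TICKETS_PER_DAY
print(f"Daily Jira volume: {daily_volume / 10**6:.0f} MB")
print(f"Share of one indexer's daily capacity: "
      f"{daily_volume / INDEXER_CAPACITY_BYTES:.4%}")
# -> roughly 20 MB/day, a tiny fraction of a single indexer's capacity
```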
That said, I would recommend starting with a limited dataset and expanding from there if you are concerned.
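One way to run that limited first pass is to pull a small, bounded batch from Jira's search REST API and only widen the JQL window once you're happy with the event sizes. The base URL, credentials, and JQL below are placeholder assumptions:

```python
import requests

# Placeholder connection details for a Jira Cloud instance.
JIRA_URL = "https://your-domain.atlassian.net"
AUTH = ("user@example.com", "api-token")  # basic auth with an API token

def fetch_recent_tickets(jql: str = "updated >= -1d", limit: int = 50) -> list:
    """Fetch a small, bounded batch of tickets for a trial ingestion run."""
    resp = requests.get(
        f"{JIRA_URL}/rest/api/2/search",
        params={
            "jql": jql,
            "maxResults": limit,                  # keep the first run small
            "fields": "status,priority,updated",  # only fields you will chart
        },
        auth=AUTH,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["issues"]
```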