All Posts

The URL in log4j2 is "http://127.0.0.1:8088", and Splunk Web (localhost) is running on port 8000, whereas the project listener is on port 8081. Yes, I have enabled SSL. Most documentation shows the same settings, so I followed them, yet I still cannot see the logs.
Hi @fhatrick
Splunk HEC typically listens on port 8088 - have you changed this default port to something else? Have you enabled SSL for HEC? If not, you will need to use http:// instead of https://.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
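One quick way to verify which scheme HEC expects is a curl smoke test against the collector endpoint. This is only a sketch: the host and token below are placeholders, and the command is printed rather than executed so you can review it first. Swap https for http if SSL is not enabled for HEC.

```shell
# Placeholders - replace with your HEC host and a real HEC token.
HEC_HOST="127.0.0.1"
HEC_TOKEN="00000000-0000-0000-0000-000000000000"
# /services/collector/event is the standard HEC endpoint; use http://
# here instead if SSL is not enabled for HEC.
HEC_URL="https://${HEC_HOST}:8088/services/collector/event"
# Print the smoke-test command rather than running it, since the host
# and token above are placeholders.
echo curl -k "$HEC_URL" \
  -H "Authorization: Splunk $HEC_TOKEN" \
  -d '{"event": "hec smoke test"}'
```

A successful HEC responds with `{"text":"Success","code":0}`; a connection refusal or a TLS handshake error usually means the wrong port or the wrong scheme.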
Hi @Ana_Smith1
In the grand scheme of the data ingestion that Splunk deals with, I think you'd be hard-pushed to have any issues importing thousands of Jira tickets per day. Even if each ticket were 20,000 characters long, 1,000 tickets would only amount to roughly 20 megabytes of data, while a single Splunk indexer is typically capable of around 300 gigabytes of ingestion per day. That said, I would recommend trying a limited dataset first and then expanding if you are concerned.
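The volume estimate above can be sanity-checked with a one-line calculation; the per-ticket size and daily count are the assumed figures from this thread, not measured values.

```shell
# Back-of-envelope ingestion estimate using the numbers from the post.
TICKETS_PER_DAY=1000
BYTES_PER_TICKET=20000                       # ~20,000 characters per ticket
TOTAL_BYTES=$((TICKETS_PER_DAY * BYTES_PER_TICKET))
TOTAL_MB=$((TOTAL_BYTES / 1000000))
echo "${TOTAL_MB} MB/day"                    # prints "20 MB/day"
```

Even at ten times that ticket volume, the daily total stays well under a single indexer's typical capacity.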
Hi @Ana_Smith1
You can use the "Jira Issue Input Add-on" app at https://splunkbase.splunk.com/app/6168, which allows you to run JQL against your Jira instance to pull down tickets matching your search criteria.
Hi everyone,
As part of a project, I'm integrating Jira with Splunk to visualize ticket data (status, priority, SLA, etc.). I'm currently concerned about the data volume and how Splunk handles high ticket traffic. Does anyone have experience with sending a large number of Jira tickets (thousands or more) to Splunk on a regular basis?
- Are there limits or performance issues to be aware of?
- Should I split the integration by project, or is it manageable in a single pipeline?
- Are there any best practices for optimizing ingestion and storage in Splunk in such cases?
Any insights or shared experiences would be highly appreciated. Thanks in advance!
Hello PickleRick,
The architecture is simple: I have Universal Forwarders on around 30 servers with /opt/splunkforwarder/etc/apps/druid_forwarder/default/inputs.conf (contents are in the first post), and then I have one indexer with /opt/splunk/etc/apps/druid_utils/default/props.conf (contents are in the first post). The inputs.conf is only on the universal forwarder(s), while the props.conf is only on the indexer.
I’m working on a project that requires integrating Jira with Splunk to collect ticket data (such as status, priority, and SLA information) and visualize it in real-time dashboards. What are the best practices or tools for doing this efficiently, especially across multiple Jira projects?
I think the problem is in the indexer node itself, but I still can't find out why it can query the Splunk internal logs.
First, I have 3 different servers (HF, SH, and IDX), and distributed search goes to the IDX. There was an incident where the IDX server shut down, and after I started the Splunk services again, I can't query any data. I tried querying index=* and got no results.
Doesn't PowerConnect need a paid SAP add-on? Because he asked in the post how to monitor for free.
Already done this - since Splunk has to run as the splunk user, sir, I already changed the permissions before starting the service.
Hi
It's like @livehybrid said - you cannot/shouldn't try it that way. Basically, there are two options, depending on how your data is collected and where it's created. On the SCP side, you can set up Federated Search in your SCP and use it to access data from another SCP stack. See https://docs.splunk.com/Documentation/SplunkCloud/9.3.2411/FederatedSearch/fsoptions. The second option is to replicate the data before you send it into the SCP stack; e.g., you could set up your own HFs where you configure this.
r. Ismo
Can you tell us more about what you have installed and how, and what kind of distributed environment you have? Is the problematic node an indexer, a search head, or some other node?
@arsidiq
Verify permissions for the Splunk directories. If they've changed to root after a reboot, correct them with:
chown -R splunk:splunk /opt/splunk
Are you able to see the data for other indexes?
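If you want to confirm the ownership fix took effect, a small check like the following can help. It assumes the default install path /opt/splunk and a local user named splunk; adjust both for your environment.

```shell
# Assumed default install path; override via the SPLUNK_HOME env var.
SPLUNK_HOME="${SPLUNK_HOME:-/opt/splunk}"
# List up to 20 files not owned by the splunk user; empty output means
# ownership is fine (errors are suppressed in case the path is absent).
not_splunk=$(find "$SPLUNK_HOME" ! -user splunk 2>/dev/null | head -n 20)
if [ -n "$not_splunk" ]; then
  echo "Files not owned by splunk (fix with: chown -R splunk:splunk $SPLUNK_HOME):"
  echo "$not_splunk"
else
  echo "Ownership OK (or $SPLUNK_HOME not present on this host)."
fi
```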
@arsidiq
Refer to these:
Solved: Why is no data being written to the _internal inde... - Splunk Community
Solved: Why is _internal index is disabled? - Splunk Community
@arsidiq
Verify that the search head can communicate with the indexer. If it fails, check firewall rules or network issues. Ensure the indexer is listed in the search head's distributed search configuration:
- Splunk Web: Settings > Distributed Search > Search Peers
- Or check $SPLUNK_HOME/etc/system/local/distsearch.conf
Then check this on the indexer:
tail -n 100 /opt/splunk/var/log/splunk/splunkd.log
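To test search-head-to-indexer connectivity without installing telnet, you can probe the management port from the search head with bash's /dev/tcp. This is a sketch only: the hostname below is a placeholder, and the default management port 8089 is assumed.

```shell
# Placeholder indexer host; replace with your indexer's address.
INDEXER_HOST="idx.example.com"
MGMT_PORT=8089
# Open a TCP connection to the management port, bounded at 3 seconds.
# Requires bash for the /dev/tcp pseudo-device.
if timeout 3 bash -c "exec 3<>/dev/tcp/${INDEXER_HOST}/${MGMT_PORT}" 2>/dev/null; then
  echo "management port ${MGMT_PORT} on ${INDEXER_HOST} is reachable"
else
  echo "cannot reach ${INDEXER_HOST}:${MGMT_PORT} - check firewall/network"
fi
```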
Yup, the indexer is running, and I still can't query any data after the server was rebooted.
My URL in log4j2 is now "http://127.0.0.1:8000", and Splunk Web (localhost) is running on the same port, whereas the listener is on port 8081. Earlier, the URL in log4j2 was "http://127.0.0.1:8088" while Splunk Web (localhost) was running on port 8000, and the listener was on port 8081.
Does anyone know how to disable this input?
@arsidiq
Ensure the indexer is running. Log into the indexer server and check Splunk's status:
/opt/splunk/bin/splunk status
If Splunk is not running, start it:
/opt/splunk/bin/splunk start
Confirm that the search head and other components can communicate with the indexer. Test connectivity using:
ping <indexer_ip>
Verify that the Splunk management port (default: 8089) is open:
telnet <indexer_ip> 8089
Check the Splunk logs on the indexer for errors:
/opt/splunk/var/log/splunk/splunkd.log
Look for issues related to indexing, disk space, or corrupted buckets. Common issues include disk-full errors or corrupted index buckets due to an improper shutdown.
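The disk-space and log checks above can be combined into a short script to run on the indexer after a reboot. The paths assume a default /opt/splunk install; a full disk is a common cause of this symptom, since Splunk pauses indexing when free space falls below minFreeSpace (5000 MB by default).

```shell
# Assumed default install path; override via the SPLUNK_HOME env var.
SPLUNK_HOME="${SPLUNK_HOME:-/opt/splunk}"
LOG="$SPLUNK_HOME/var/log/splunk/splunkd.log"
# Check free space on the volume holding Splunk; fall back to / if the
# install path does not exist on this host.
df -h "$SPLUNK_HOME" 2>/dev/null || df -h /
# Surface recent errors and warnings, if the log exists on this host.
if [ -f "$LOG" ]; then
  tail -n 100 "$LOG" | grep -Ei 'ERROR|WARN' | tail -n 20
else
  echo "no splunkd.log at $LOG (not a Splunk host?)"
fi
```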