All Posts


OK, I got it working on the second attempt. There were two issues (which, combined, threw me off track):
- There was an uncommitted role change for a server to become a search head (Monitoring Console -> General Settings). That, I assume, was applied during the first attempt and reverted before the second try.
- When you change the server name of the Manager Node, the search heads can no longer verify the connection (the name in the cert store no longer matches), so they go missing in the Monitoring Console. You have to reset/reapply the connection credentials (Settings -> Distributed search -> Search peers).
I think the first issue is what caused the SHC to go offline, because it did not happen on the second implementation. Problem solved.
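If anyone hits the same cert-store mismatch, the peer credentials can also be re-established from the search head CLI. A minimal sketch using the add search-server command (host and credentials below are placeholders):

splunk add search-server https://manager.example.com:8089 -auth admin:changeme -remoteUsername admin -remotePassword changeme

Re-adding an existing peer this way exchanges fresh credentials, which should have the same effect as resetting them in the UI.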
@livehybrid Thanks again, karma added to the post. Could you help or suggest something on the connection timeout error with the Python script?
The edited app.conf has additional lines indicating that the app was configured:

[install]
is_configured = true
state = enabled
allows_disable = false
install_source_checksum = <checksum>
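To double-check which values are actually in effect (and which file each one comes from), a quick btool sketch; the app name is a placeholder:

splunk btool --app=<app_name> app list install --debug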
Hi @MatiaFazio
Splunk ITSI is a licensed premium app which requires an additional license on top of a standard Splunk ingest/SVC license. Has ITSI been included as part of your license with Splunk? The account team should enable the download for your account once an ITSI license has been issued. Either way, I think the best approach here is to reach out to your Splunk account team to discuss, as we aren't able to share restricted apps.
Hi @MatiaFazio
The PowerConnect for SAP app needs to be approved for download against your Splunkbase account. The app is managed by PowerConnect themselves, so you may need to contact them via https://www.powerconnect.io/contact-us/ to be granted access to download it on Splunkbase. Alternatively, I believe that Splunk can license this app out, so it's probably worth checking with your Splunk account team/manager first to see if you are already licensed for it and whether they are able to help.
Hi @Alan_Chan
An extremely large search job size can be caused by several factors. To troubleshoot, first identify the search jobs responsible:

| rest /services/search/jobs
| search dispatchState="DONE" AND isFinalized=0
| sort - runDuration
| table sid, label, runDuration, scanCount, resultCount, diskUsage
| rename label as "Search Query"

This SPL lists recent search jobs, sorted by run duration, with details such as the search query, scan count, result count, and disk usage.
The limits.conf you've provided only contains one setting, read_final_results_from_timeliner = 1. This setting relates to how Splunk reads final results, but it doesn't directly explain the large search job size.
To mitigate large search job sizes, consider optimising your search queries to reduce the amount of data being processed and returned:
- Use | stats or other transforming commands early in your search to reduce data volume.
- Limit the time range of your searches.
- Avoid using * or overly broad field names in your searches.
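If oversized jobs keep recurring, you can also cap the disk space that a user's search jobs may consume. A sketch assuming the srchDiskQuota setting in authorize.conf (role name and value are examples):

[role_user]
# Maximum disk space, in MB, that search jobs owned by users with this role can use
srchDiskQuota = 500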
It looks as if the app function "Submit indicator" will be able to solve this for us:

{
  "indicatorValue": "9.9.9.9",
  "indicatorType": "IpAddress",
  "action": "Block",
  "title": "Block outbound traffic to 9.9.9.9",
  "description": "Referanse: JIRA-XYZ",
  "generateAlert": true
}
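For anyone who wants to call the API directly rather than via the app, a minimal Python sketch against what I understand to be the matching Defender for Endpoint endpoint (POST /api/indicators); the token is a placeholder and token acquisition is out of scope:

import requests

# Placeholder: obtain a token with the Ti.ReadWrite permission via your usual OAuth flow
ACCESS_TOKEN = "<your_access_token>"

payload = {
    "indicatorValue": "9.9.9.9",
    "indicatorType": "IpAddress",
    "action": "Block",
    "title": "Block outbound traffic to 9.9.9.9",
    "description": "Referanse: JIRA-XYZ",
    "generateAlert": True,
}

# Submit (or update) the indicator
resp = requests.post(
    "https://api.securitycenter.microsoft.com/api/indicators",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())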
Hi @Punnu
Are you running the search manually from the same app context as the dashboard? It could be that certain field extractions etc. work differently if there is an app-context field eval, for example. Are there any warnings when running it in the UI via the search bar?
Ahh yes, my apologies, I got my wires crossed too. I'll update the original Python sample in case others want to use it. Glad you've been able to make some progress on this! If you get a moment, please consider marking one of the posts as an accepted solution and/or adding karma.
Hi @livehybrid
Thanks for the reply. It was as I read in the docs too, but nice to get it confirmed.
Kind regards, las
We found that the search job size becomes extremely large during searches. My Splunk instance is a newly installed testing lab with only the following limits.conf. Does anyone have any idea about this situation?

/opt/splunk/etc/system/local/limits.conf:

[search]
read_final_results_from_timeliner = 1

Update: when I change read_final_results_from_timeliner = 1 to read_final_results_from_timeliner = true, the job size is reduced, and I don't know why.
@livehybrid
Yeah, got it. After replying to your comment I checked the docs again: the last portion of the URL, /roles, should be removed, and then it worked. Thanks a lot for this.
In the Python script I am getting a connection timed out error, but it works fine in the browser. I am trying from the same domain, and I get the connection timed out error with both the Python and curl methods. Any idea what could have been missed?
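For what it's worth, here is a trimmed-down sketch of what I am trying (controller URL, user ID, credentials, and proxy are all placeholders); I added an explicit timeout, and I am wondering whether a proxy could be involved, since browsers pick up the system proxy automatically while Python and curl do not:

import requests

BASE_URL = "https://mycontroller.example.com/controller"  # placeholder controller
USER_ID = "42"                                            # placeholder user ID

# If your network requires a proxy, set it explicitly here.
proxies = {
    "https": "http://proxy.example.com:8080",
}

resp = requests.get(
    f"{BASE_URL}/api/rbac/v1/users/{USER_ID}",
    auth=("user@account", "password"),  # placeholder credentials
    proxies=proxies,                    # drop this argument if no proxy is needed
    timeout=30,                         # fail fast instead of hanging
)
resp.raise_for_status()
print(resp.json())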
We're looking to block outgoing traffic from a specific client or group using the Microsoft Defender for Endpoint app. If we were to implement this ourselves using the MS API, it would be something like:

POST https://api.securitycenter.microsoft.com/api/machines/{machineId}/restrict
Authorization: Bearer {your_access_token}
Content-Type: application/json

{
  "action": "Block",
  "destination": "IP_ADDRESS_OR_DOMAIN",
  "protocol": "TCP",
  "port": "443"
}

However, I haven't been able to find a corresponding call in the app source code. Am I missing something, or isn't this currently supported?
Good morning, we recently installed SAP architecture in our infrastructure and we would need to download the Splunk IT Service Intelligence app, but I saw that the download is restricted to authorized users. How do I get authorized?
Good morning, we recently installed SAP architecture in our infrastructure and we would need to download the PowerConnect for SAP solution app, but I saw that the download is restricted to authorized users. How do we get authorized?
Hi @ArunkumarKarmeg
This is it within the docs; are you able to see this page? https://docs.appdynamics.com/appd/24.x/latest/en/extend-splunk-appdynamics/splunk-appdynamics-apis/rbac-api#id-.RBACAPIv25.1-GetUserbyID
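For reference, a quick curl sketch of that Get User by ID call; the controller host, user ID, and credentials are placeholders, so check the linked docs for the exact path in your deployment:

curl -s -u "user@account:password" \
  "https://mycontroller.example.com/controller/api/rbac/v1/users/42"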
No, they are not being broken incorrectly. The SEDCMDs are being applied, and apparently they remove part of the event data, so the remaining data sometimes happens to be valid JSON and sometimes isn't. But it has nothing to do with event breaking.
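To illustrate the distinction, a hypothetical props.conf sketch (the sourcetype and regexes are made up): LINE_BREAKER controls where events are split, while SEDCMD rewrites the raw text of each event afterwards, and the two are independent:

[my:sourcetype]
# Event breaking: split events on runs of newlines (the default-style pattern)
LINE_BREAKER = ([\r\n]+)
# Text rewrite: applied to each already-broken event; this strips everything
# before the first { so that only the JSON payload remains
SEDCMD-strip_prefix = s/^[^{]*//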
It seems that you misunderstood my question a bit. Every app has a deployer_push_mode setting, and as I said, the affected apps use deployer_push_mode = local_only.
Hi @Harikiranjammul
Edit: I see there have been other posts since I opened this thread, but hopefully this helps someone in the future or provides another option! How about the following?

| tstats count WHERE earliest=-24h@h latest=@h index=* AND source=* BY _time span=2h, index, source
| stats dc(_time) as num_buckets BY index, source
| where num_buckets = 12
Hi @Na_Kang_Lim
Within your deployment server, have you checked the option to "Restart Splunkd"? If you tick this option then the Splunk UF will restart after installing (and uninstalling) the app, and thus should clear the cache/existence of the input after the DS uninstalls the app.
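The same behaviour can also be set directly in serverclass.conf on the deployment server; a sketch with placeholder server class and app names:

[serverClass:my_serverclass:app:my_app]
# Restart the forwarder after this app is deployed or removed
restartSplunkd = true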