All Posts


Hi @Alan_Chan

The issue of an extremely large search job size can be caused by several factors. To troubleshoot, first check which search queries are producing the large job sizes:

| rest /services/search/jobs
| search dispatchState="DONE" AND isFinalized=0
| sort - runDuration
| table sid, label, runDuration, scanCount, resultCount, diskUsage
| rename label as "Search Query"

This SPL will list the recent search jobs, sorted by their run duration, and provide details such as the search query, scan count, result count, and disk usage.

The limits.conf you've provided only contains one setting: read_final_results_from_timeliner = 1. This setting is related to how Splunk reads final results, but it doesn't directly explain the large search job size.

To mitigate large search job sizes, consider optimising your search queries to reduce the amount of data being processed and returned:
- Use | stats or other transforming commands early in your search to reduce data volume.
- Limit the time range of your searches.
- Avoid using * or overly broad field names in your searches.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
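The diskUsage field returned by the jobs endpoint is reported in bytes, so it can help to post-process the results and flag only the outliers. A minimal Python sketch of that filtering step, using made-up sample rows and an arbitrary 100 MB threshold (not tied to any live instance):

```python
# Sample rows mimicking fields returned by /services/search/jobs.
# The sids, labels, and sizes are illustrative only.
jobs = [
    {"sid": "job1", "label": "error dashboard", "diskUsage": 250_000_000},
    {"sid": "job2", "label": "login audit",     "diskUsage": 4_096_000},
]

THRESHOLD_BYTES = 100 * 1024 * 1024  # arbitrary 100 MB cut-off

def oversized(jobs, limit=THRESHOLD_BYTES):
    """Return (sid, label) pairs for jobs exceeding `limit` bytes on disk."""
    return [(j["sid"], j["label"]) for j in jobs if j["diskUsage"] > limit]

print(oversized(jobs))  # only "job1" exceeds the threshold
```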
It looks as if the app's "Submit indicator" function will be able to solve this for us:

{
  "indicatorValue": "9.9.9.9",
  "indicatorType": "IpAddress",
  "action": "Block",
  "title": "Block outbound traffic to 9.9.9.9",
  "description": "Referanse: JIRA-XYZ",
  "generateAlert": true
}
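A short Python sketch of assembling that payload programmatically. The field names are taken from the post above; build_indicator is a hypothetical helper, and no request is actually sent:

```python
import json

def build_indicator(value, ind_type, action, title, description, alert=True):
    """Assemble a Defender 'Submit indicator' payload as shown above."""
    return {
        "indicatorValue": value,
        "indicatorType": ind_type,
        "action": action,
        "title": title,
        "description": description,
        "generateAlert": alert,
    }

payload = build_indicator(
    "9.9.9.9", "IpAddress", "Block",
    "Block outbound traffic to 9.9.9.9", "Referanse: JIRA-XYZ",
)
print(json.dumps(payload, indent=2))
```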
Hi @Punnu

Are you running the search manually from the same app context as the dashboard? It could be that certain field extractions, field evals, etc. behave differently in another app context. Are there any warnings when running it in the UI via the search bar?
Ahh yes, my apologies, I got my wires crossed too. I'll update the original Python sample in case others want to use it. Glad you've been able to make some progress on this! If you get a moment, please consider marking one of the posts as an accepted solution and/or adding karma.
Hi @livehybrid

Thanks for the reply. It matches what I read in the docs, but it's nice to get it confirmed.

Kind regards
las
We found that the search job size becomes extremely large during searches. My Splunk instance is a newly installed testing lab with only the following limits.conf. Does anyone have any idea about this situation?

/opt/splunk/etc/system/local/limits.conf:

[search]
read_final_results_from_timeliner = 1

Update: When I change read_final_results_from_timeliner = 1 to read_final_results_from_timeliner = true, the job size is reduced, and I don't know why.
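Many .conf consumers normalise boolean settings leniently, so "1" and "true" are often interchangeable; whether a particular Splunk setting treats them identically can vary, which may explain the behaviour difference described above. A small Python sketch of such lenient boolean parsing (an illustration of the general idea, not Splunk's actual parser):

```python
# Lenient boolean parsing, similar in spirit to how many .conf
# consumers normalise values like "1", "true", "yes".
TRUE_VALUES = {"1", "true", "t", "yes", "y", "on"}
FALSE_VALUES = {"0", "false", "f", "no", "n", "off"}

def parse_bool(raw):
    """Normalise a raw config string to a Python bool."""
    v = raw.strip().lower()
    if v in TRUE_VALUES:
        return True
    if v in FALSE_VALUES:
        return False
    raise ValueError(f"not a boolean: {raw!r}")

print(parse_bool("1"), parse_bool("true"))  # both parse as True
```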
@livehybrid

Yeah, got it. After replying to your comment I checked the docs again; the last portion of the URL, /roles, should be removed, and then it worked. Thanks a lot for this.

In the Python script I am getting a "connection timed out" error, but it works fine in the browser. I am trying from the same domain, and I get the connection timed out error with both the Python and curl methods. Any idea what might have been missed?
We're looking to block outgoing traffic from a specific client or group using the Microsoft Defender for Endpoint app. If we were to implement this ourselves using the MS API, it would be something like:

POST https://api.securitycenter.microsoft.com/api/machines/{machineId}/restrict
Authorization: Bearer {your_access_token}
Content-Type: application/json

{
  "action": "Block",
  "destination": "IP_ADDRESS_OR_DOMAIN",
  "protocol": "TCP",
  "port": "443"
}

However, I haven't been able to find a corresponding call in the app source code. Am I missing something, or isn't this currently supported?
Good morning, we recently installed SAP architecture in our infrastructure and we would need to download the Splunk IT Service Intelligence app, but I saw that the download is restricted to authorized users. How do I get authorized?
Good morning, we recently installed SAP architecture in our infrastructure and we would need to download the PowerConnect for SAP solution app, but I saw that the download is restricted to authorized users. How do we get authorized?
Hi @ArunkumarKarmeg

This is it within the docs; are you able to see this page? https://docs.appdynamics.com/appd/24.x/latest/en/extend-splunk-appdynamics/splunk-appdynamics-apis/rbac-api#id-.RBACAPIv25.1-GetUserbyID
No, they are not being broken. The SEDCMDs are being applied, and apparently they remove part of the event data, so the remaining data sometimes happens to be valid JSON and sometimes isn't. But it has nothing to do with event breaking.
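SEDCMD applies a sed-style substitution to the raw event text at index time, so whether the remainder parses as JSON depends entirely on what the original event contained. A small Python sketch of that effect, with a hypothetical pattern and sample events:

```python
import json
import re

def apply_sedcmd(event):
    """Mimic a hypothetical SEDCMD = s/^[^{]+// that strips a
    syslog-like prefix up to the first '{'."""
    return re.sub(r"^[^{]+", "", event)

good = 'May 07 host app: {"user": "alice"}'
bad  = 'May 07 host app: {"user": "alice"   # prefix plus truncated JSON'

for raw in (good, bad):
    stripped = apply_sedcmd(raw)
    try:
        json.loads(stripped)
        print("valid JSON:", stripped)
    except json.JSONDecodeError:
        print("not JSON:", stripped)
```

The substitution itself succeeds on both events; only the first leaves behind well-formed JSON, which matches the "sometimes valid, sometimes not" behaviour described above.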
It seems that you misunderstood my question a bit. Every app has a deployer_push_mode setting, and as I said, the affected apps use deployer_push_mode = local_only.
Hi @Harikiranjammul

Edit - I see there have been other posts since I opened this thread, but hopefully this helps someone in the future or provides another option! How about the following?

| tstats count WHERE earliest=-24h@h latest=@h index=* AND source=* BY _time span=2h, index, source
| stats dc(_time) as num_buckets BY index, source
| where num_buckets = 12
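The query above counts 2-hour buckets per index/source pair over the last 24 hours and keeps only the pairs that appear in all 12 buckets. The same idea in a short Python sketch, using hypothetical sample events:

```python
from collections import defaultdict

SPAN = 2 * 3600      # 2-hour buckets, matching span=2h
WINDOW_BUCKETS = 12  # 24 h / 2 h

def continuous_pairs(events):
    """events: iterable of (epoch_seconds, index, source).
    Return (index, source) pairs seen in all WINDOW_BUCKETS buckets."""
    buckets = defaultdict(set)
    for ts, idx, src in events:
        buckets[(idx, src)].add(ts // SPAN)  # dc(_time) per pair
    return [pair for pair, b in buckets.items() if len(b) == WINDOW_BUCKETS]

# Sample: "main"/"a.log" has an event in every bucket, "web"/"b.log" in one.
events = [(i * SPAN + 60, "main", "a.log") for i in range(12)]
events += [(60, "web", "b.log")]
print(continuous_pairs(events))
```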
Hi @Na_Kang_Lim

Within your deployment server, have you checked the option to "Restart Splunkd"? If you tick this option then the Splunk UF will restart after installing (and uninstalling) the app, and thus should clear the cache/existence of the input after the DS uninstalls the app.
Hi @SN1

This largely depends on the implementation of your dashboard. Could you please share your existing dashboard code so that we can try to make this work for you?
I have a dashboard with 4 panels and a checkbox with 2 options, "solved" and "unsolved". For "unsolved", the colour of the panels should remain red when the count is greater than 0, which I am able to do with Splunk's built-in dashboard features. But for the "solved" option, every panel should be green. How should I approach this?
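One common Simple XML approach to the scenario described above is to set a colour palette token when the input changes and feed it into the panel's rangeColors option. This is a hypothetical sketch: the token names, index, and colours are illustrative, and a radio input may suit two mutually exclusive states better than a checkbox:

```xml
<input type="radio" token="status_tok">
  <choice value="solved">Solved</choice>
  <choice value="unsolved">Unsolved</choice>
  <change>
    <condition value="solved">
      <!-- all-green palette: panel is green regardless of count -->
      <set token="palette">["0x53a051","0x53a051"]</set>
    </condition>
    <condition value="unsolved">
      <!-- green when count is 0, red when count > 0 -->
      <set token="palette">["0x53a051","0xdc4e41"]</set>
    </condition>
  </change>
</input>

<single>
  <search>
    <query>index=my_index status=$status_tok$ | stats count</query>
  </search>
  <option name="rangeValues">[0]</option>
  <option name="rangeColors">$palette$</option>
  <option name="useColors">1</option>
</single>
```

With rangeValues of [0], the first colour applies when the count is 0 and the second when it is greater, so swapping the palette per option gives the red/green behaviour asked for without changing the search.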
@livehybrid, thanks very much for your prompt response on this.

The API shared for getting the roles assigned to an individual user is not working, and I am not able to find this API in the documentation. There are only APIs for getting the complete set of roles, not roles specific to a user:

GET /controller/api/rbac/v1/users/{userId}/roles - to retrieve the roles associated with a specific user

Could you check and let me know?
I have this kind of weird (and dangerous) custom app that changes the UF instance GUID. Basically, I created a .sh file, which uses the "sed" command on Linux to change the UUID value in the /opt/splunkforwarder/etc/instance.cfg file. Using a .sh script to make changes to the SPLUNK_HOME directory is quite dangerous and I'd advise against it; however, this task is quite simple and I tested it, so I decided to deploy an app called REGEN_GUID with a single inputs.conf file that has the stanza to run the script:

[script://./bin/regenerate_guid.sh]
interval = 900
source = regenerate_guid
sourcetype = regenerate_guid
index = <REDACTED>
disabled = 0

In general, quite simple, and it ran. I could change the instance UUID and nothing critical happened. However, after I saw that the UUID had been changed, I wanted to remove the client from the app. In the deployment server UI, I went into the app section and removed the IP of the instance from the whitelist. Checking splunkd.log, I could see the log entry where it says it is removing the app. However, after that, I checked the log again and saw that the UF is still trying to run the script. The log entry appears every 15 minutes, which is equal to the script interval, so basically the UF is still executing the scripted input. The log looks like this:

05-07-2025 11:00:07.938 +0700 ERROR ExecProcessor - message from "/opt/splunkforwarder/etc/apps/REGEN_GUID/bin/regenerate_guid.sh" /bin/sh: 1: /opt/splunkforwarder/etc/apps/REGEN_GUID/bin/regenerate_guid.sh: not found

Does anyone know the reason? I think it might be that the way Splunk schedules script inputs is through some kind of cron-like mechanism, and my app failed to update that when it was removed?
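To confirm how often the stale scripted input is still firing, the timestamps of those ExecProcessor errors can be extracted from splunkd.log and the gaps compared against the configured interval. A small Python sketch, using hypothetical log lines in the format quoted above:

```python
import re
from datetime import datetime

# Two sample splunkd.log lines in the format quoted above (illustrative).
LOG = """\
05-07-2025 11:00:07.938 +0700 ERROR ExecProcessor - message from "/opt/splunkforwarder/etc/apps/REGEN_GUID/bin/regenerate_guid.sh" /bin/sh: 1: script: not found
05-07-2025 11:15:07.941 +0700 ERROR ExecProcessor - message from "/opt/splunkforwarder/etc/apps/REGEN_GUID/bin/regenerate_guid.sh" /bin/sh: 1: script: not found
"""

# Capture the date and time fields of ExecProcessor "not found" errors.
PATTERN = re.compile(r"^(\S+ \S+) \S+ ERROR ExecProcessor .*not found", re.M)

times = [datetime.strptime(t, "%m-%d-%Y %H:%M:%S.%f")
         for t in PATTERN.findall(LOG)]
gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
print(gaps)  # gaps of ~900 s match the 15-minute script interval
```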
You may want to check this out too.   https://community.splunk.com/t5/Getting-Data-In/HEC-timestamp-recognition/m-p/537762