All Posts

Hello @jkat54! I'm having some trouble getting the app to work. The ultimate goal is to change the ownership of searches automatically (e.g. from a scheduled report). Here is the search:

``` get all info about the saved searches on the instance ```
| rest /services/saved/searches splunk_server=local
``` exclude searches owned by user "user2", exclude disabled searches, keep only the app "search" ```
| search eai:acl.owner!="user2 " disabled=0 eai:acl.app="search"
| rename eai:acl.owner as owner, eai:acl.app as app, eai:acl.sharing AS sharing
``` extract the management port and the search name, already URL-encoded ```
| rex field=id "^\S+(?<mngmport>\:\d+)\/servicesNS\/\S+\/saved\/searches\/(?<search_name>\S+)$"
``` build the URI for the curl; mngmport is the management port (e.g. ":8089") ```
| eval url = "https://" + splunk_server + mngmport + "/servicesNS/" + owner + "/" + app + "/saved/searches/" + search_name + "/acl"
``` for future use, not yet implemented ```
| eval description = description + " - moved from " + owner
``` construct data, e.g. {"owner":"user2","sharing":"global"} ```
| eval data = json_object("owner", "user2", "sharing", sharing)
``` debug & co ```
| table splunk_server app owner title description disabled action.notable cron_schedule url data id sharing *
``` the curl, which isn't working / I'm probably doing something wrong here ```
| curl urifield=url method="post" splunkauth="true" debug=true datafield=data
| table curl*

I've tried to specify the cert in some way, but it seems there are no arguments I can pass for it. Since I can't find a solution (searching online I found a suggestion to bypass SSL inspection, but I don't think that applies in my case), I'm here to ask for help. I'd prefer to avoid simple authentication (user:password). The error I get, from the curl_message field, is:

HTTPSConnectionPool(host='host', port=8089): Max retries exceeded with url: /servicesNS/user1/search/saved/searches/dummy%20search/acl (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1143)')))
curl_status: 408

Thanks in advance!
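For comparison, the same ACL change can be expressed as a plain command-line request to the endpoint built by the search above. This is only a hedged sketch: the token, hostname, and CA bundle path are placeholders, while the endpoint, port, search name, and the owner/sharing fields come from the post and its error message.

# Hedged sketch: POST the new owner and sharing to the saved search ACL endpoint.
# <token>, splunk-host and the CA bundle path are placeholders.
curl --cacert /path/to/splunk_ca_chain.pem \
     -H "Authorization: Bearer <token>" \
     -X POST \
     "https://splunk-host:8089/servicesNS/user1/search/saved/searches/dummy%20search/acl" \
     -d owner=user2 -d sharing=global

With command-line curl, the self-signed chain can be trusted via --cacert (or skipped with -k for testing); the error above suggests the app exposes no comparable option, which is exactly the gap being asked about.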
Hi. If your Splunk version is 9.1 or higher, please refer to the case below. You can solve it by setting the option below in server.conf to false.

> https://splunk.my.site.com/customer/s/article/PreforkedSearchProcessException-can-t-launch-new-search-process-because-pool-is-full

However, since the default setting is true, it is recommended that you contact Splunk Support before deciding to change it.

[general]
enable_search_process_long_lifespan = false
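As a rough illustration of where the setting goes (assuming an on-prem search head where you control server.conf; on Splunk Cloud this would have to go through Support), something like the following applies the stanza and restarts Splunk. The system/local path is a generic assumption, not a value from the answer above.

# Hedged sketch: append the stanza from the answer above and restart.
# $SPLUNK_HOME and the choice of system/local are assumptions.
cat >> "$SPLUNK_HOME/etc/system/local/server.conf" <<'EOF'
[general]
enable_search_process_long_lifespan = false
EOF
"$SPLUNK_HOME/bin/splunk" restart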
Hi @indhumathys

The error indicates the DB agent cannot establish a connection with the AppDynamics Controller, likely due to network issues, incorrect configuration, or SSL/TLS problems.

- Verify the Controller URL and port in controller-info.xml or the startup parameters are correct (grace202504080038013.saas.appdynamics.com:443).
- Ensure the Controller is reachable from the DB agent host (test with telnet or curl), e.g. curl -v https://grace202504080038013.saas.appdynamics.com
- Check that your Java version supports the required TLS version (Java 8u202 supports TLS 1.2 by default).
- Validate the account name, access key, and application name in the agent configuration.
- If using a proxy, ensure the proxy settings are correctly configured.

Check out the following docs for more info:
https://docs.appdynamics.com/appd/24.x/latest/en/database-visibility/administer-the-database-agent/install-the-database-agent

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
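For reference, two quick checks that can be run from the DB agent host. A hedged sketch only: the hostname is taken from the error in the post, and any proxy in the path would change the results.

# Hedged sketch: basic reachability and TLS checks from the DB agent host.
curl -v https://grace202504080038013.saas.appdynamics.com
# Confirm a TLS 1.2 handshake completes against the Controller on port 443.
openssl s_client -connect grace202504080038013.saas.appdynamics.com:443 -tls1_2 </dev/null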
@ramuzzini - Glad to hear that you were able to resolve the issue. Please click "Accept as Solution" on my answer so that future Splunk users can benefit from it when they see the solution that worked for you.
Yes. S2S over HTTP works relatively OK in a standard environment and over proxies/LBs and such (that's why it was introduced, I think - it's way easier for customers to allow outgoing HTTP traffic to Cloud than to open ports for some unknown protocol; it raises far fewer questions). But there is no guarantee that it will work when you try to manipulate the payload.
I don't think I phrased the question correctly. Right now we have a deployment server and will need to migrate it into Cloud. I think the biggest concern for us is the network latency in Cloud from/to the target endpoints. Is there a search query that can help check whether network traffic is actively flowing?
No fields have the group option checked! I've started adding a UID to all requests, which has fixed the issue, but I'd still like to know if there is a setting somewhere else.
Hi

Yes, creating a service account in Splunk Cloud is the same as creating a local user via Settings > Users > Create User.

- Roles: Assign the minimum privileged role that lets the service account own and manage the required knowledge objects, run searches, etc. Optionally, create a custom role dedicated to the service user.
- App context: Ensure the role has write permissions on the relevant apps where the knowledge objects reside.

This service account will then be a stable owner for orphaned knowledge objects, avoiding future orphaning if admins or users who own the KOs were to leave. Use a strong, unique password and store it securely. Document the account's purpose and ownership internally.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
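As a point of reference only: on Splunk Enterprise the same user creation can also be done via the REST API, though on Splunk Cloud the management port is typically not exposed, so the UI path above is the supported route. This is a hedged sketch and every value in it (account name, password, role, host) is a placeholder, not a recommendation from the answer.

# Hedged sketch (Splunk Enterprise REST API; usually not reachable on Splunk Cloud).
# All values below are placeholders.
curl -k -u admin:'<admin password>' \
     https://splunk-host:8089/services/authentication/users \
     -d name=svc_knowledge_objects \
     -d password='<strong unique password>' \
     -d roles=power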
I am also facing a server error while creating an alert. Why?
Hi @Dimpo

Unfortunately it isn't possible to change details like your name via the community system - these settings are synced from your Splunk.com account. To change them, please contact support via https://www.splunk.com/en_us/customer-success/support-programs.html

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
Thanks for the explanation! Sending S2S over HTTP works fine for us, especially since the log data can't be manipulated in-flight.
I mean that normally HEC input expects separate events in one of the supported formats, and you can see the payload in clear text (as long as you decode the transport layer). httpout indeed does some magic to send S2S over HTTP, but the contents are not readable as plain text and cannot easily be manipulated. S2S can send both cooked and parsed data and, as far as I remember, also supports some in-transit compression. It also supports acknowledgement within the protocol itself (not on a separate endpoint as with HEC).
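To make the contrast concrete, this is roughly what a plain HEC event looks like on the wire - ordinary JSON you can read and rewrite, unlike the tunnelled S2S payload. A hedged sketch: the host, token, and index are placeholders.

# Hedged sketch: a normal HEC event is readable JSON posted to the collector endpoint.
# Host, token, and index are placeholders.
curl -k https://splunk-indexer:8088/services/collector/event \
     -H "Authorization: Splunk <hec-token>" \
     -d '{"event": "hello from HEC", "sourcetype": "manual", "index": "main"}'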
Thanks. Could you elaborate? My understanding is that [httpout] will tunnel S2S over HTTP to a HEC endpoint on the server. This makes one-way communication possible, since the diode accepts the HTTP session and closes it with a "200 OK" reply.
Thanks, that's what I've found as well. I did tunnel the data through an nginx reverse proxy, and that forwarded the data as "complete" and not "chunked". The problem is that this changes the design of the network and will require a new approval. So any workaround that doesn't require design changes would be great.
//Johan
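For context on why the nginx hop turns a chunked body into a complete one: by default nginx buffers the entire request body before passing it upstream, re-sending it with a Content-Length. The snippet below is only a hedged illustration of the directives involved - the location, upstream address, and TLS details are placeholders, and it has not been validated against httpout specifically.

# Hedged illustration only - placeholders throughout, not a validated httpout config.
location /services/collector {
    proxy_http_version 1.1;        # needed for chunked transfer toward the upstream
    proxy_request_buffering off;   # stream the body through instead of buffering it
                                   # (the default "on" is what turns chunked into complete)
    proxy_pass https://splunk-indexer:8088;
}

Whether the Splunk side accepts the streamed form is exactly the open question in this thread, so this is not offered as a fix, only as where the chunked-vs-complete behaviour comes from.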
Hi, I am a Splunk admin and we are re-assigning orphaned knowledge objects to my name as a temporary solution. I need to create a service account so that I can assign the orphaned knowledge objects to that account. I am doing this for the first time. Could someone please specify what roles and capabilities I should assign? Also, is creating a service account the same process as creating a local user in Splunk, i.e. Settings > Users > Create User?

PS: I am on Splunk Cloud | version: 9.3
I'm using Java 8:

java version "1.8.0_202"
Java(TM) SE Runtime Environment (build 1.8.0_202-b08)
Java HotSpot(TM) Client VM (build 25.202-b08, mixed mode)

I downloaded the DB agent from the Controller and extracted it. While starting the DB agent it throws an exception: "Could not connect to the controller/invalid response from controller".

Error message:

ControllerHttpRequestResponse:25 - Fatal transport error while connecting to URL [/controller/instance/UNKNOWN_MACHINE_ID/systemagentregistration]: org.apache.http.NoHttpResponseException: grace202504080038013.saas.appdynamics.com:443 failed to respond
10 Apr 2025 10:10:52,716 WARN [DBAgent-1] RegistrationChannel:128 - Could not connect to the controller/invalid response from controller, cannot get registration information
Hi @raysonjoberts, I have the same needs as you. Has your problem been resolved? If so, can you share the script? Thank you.
@vikas_kone  Is this resolved or still open? If not, you can try this: Service Analyzer → Your Service → KPI → Thresholding → select Per-Entity Thresholds. You'll see a new table with a list of all your entities.

Thanks,
Sujit
 
I would like the first letters of my name to be capitalised