All Posts

Hi @ejose

Please could you share the errors that you are receiving? Can you also confirm that the certificate has not expired? The reason I ask this specifically is that a Splunk forwarder will remain connected to another Splunk server even after an SSL cert has expired, as long as it does not need to create a new connection. In other words, it's possible the certificate had previously expired but you only experienced an issue once the existing connection was closed down and you upgraded.

openssl x509 -in <PathToYourCert> -noout -dates

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
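If you want to script the same expiry check, here is a minimal sketch using only Python's standard library. It parses the notAfter date string in the format that `openssl x509 -noout -dates` prints; the date values below are invented for illustration.

```python
import ssl
import time

def cert_expired(not_after, now=None):
    """Return True if a certificate's notAfter date is in the past.

    `not_after` is the date string printed by
    `openssl x509 -noout -dates`, e.g. "Jun 26 21:41:46 2025 GMT".
    """
    expiry = ssl.cert_time_to_seconds(not_after)
    return (time.time() if now is None else now) > expiry

# A date in the past reads as expired, a far-future one does not.
print(cert_expired("Jan 01 00:00:00 2020 GMT"))  # True
print(cert_expired("Jan 01 00:00:00 2099 GMT"))  # False
```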
Hi @Splunkie

To exclude cases that have transitioned to "Resolved" and only show currently active cases, find the latest status per case and filter where that status is "Active":

| your_search_here
| stats latest(status) as latest_status by case_id
| where latest_status="Active"

- stats latest(status) by case_id groups events by case and finds the most recent status update per case
- where latest_status="Active" keeps only the cases whose latest status is still "Active", which effectively excludes cases that were later resolved or closed
- Replace case_id with your actual case identifier field
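The grouping logic itself is independent of SPL. As an illustration, the same "latest status per case, then filter on Active" idea can be sketched in Python with made-up sample events:

```python
# Sample events: (timestamp, case_id, status). The data is invented for
# illustration; in Splunk the same result comes from stats latest(status).
events = [
    (1, "C1", "Active"),
    (2, "C1", "Resolved"),
    (1, "C2", "Active"),
    (3, "C3", "Active"),
    (4, "C3", "Pending"),
]

# Keep each case's most recent (timestamp, status) pair.
latest = {}
for ts, case_id, status in events:
    if case_id not in latest or ts > latest[case_id][0]:
        latest[case_id] = (ts, status)

# Keep only cases whose latest status is still "Active".
active_cases = sorted(cid for cid, (_, st) in latest.items() if st == "Active")
print(active_cases)  # ['C2']
```

C1 was later resolved and C3 later moved to Pending, so only C2 survives the filter.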
Hi

Which version of Splunk are you running? For me, when I click on "New Search" in either table view or list view I get the same behaviour: in my example the new search included index=_internal (which I had searched) plus the field/value I clicked. Does it differ if you have a more complex query?
Hello, today I found a bug(?) in the "New Search" function of the table view.

What I mean by the "New Search" function: run a search and select the table view (not raw or list), then click on one of the shown values of a field, for example the value of a host field, and select "New Search". A new search then starts with the selected field+value, but instead of reusing the index(es) from the previous search, only a * is used. As we have not defined any default indexes in our environment, those searches return no results, because no index is included in the search.

Is there a way to reconfigure this so the original index is kept instead of a plain asterisk?

Best Regards.
As I said, that sounds improbable but of course not impossible. Especially since it's Windows, I'd go for a support case at this point (or just do a quick Windows reinstall if you can afford it; that might be faster). BTW, did you verify the installer checksum before running it?
Hi, I understand your point, but I'd like to clarify a few things from my side.

The issue with my D:\ drive started immediately after installing Splunk Enterprise; just before the installation, I had accessed the drive without any problems. There are no EDR or third-party security solutions installed on my system, so that can be ruled out.

Although the Splunk installation process appeared to complete, I can't find it anywhere on the system: not in the Control Panel, not in the installation directory, and there's no desktop shortcut either. It seems like something went wrong during the install, and I strongly suspect that's what caused the issue with my drive.

Appreciate your input, and I'm open to any suggestions that could help further.
Hi Friends, I am working on a query that checks whether the value of a field has changed to a state of Resolved, so that the case can be excluded from the results of active cases. The field I am trying to use to check whether a case has been resolved is the status field.

I need help with a query that looks at all cases with the status of Active and removes from the results any cases whose status has since changed to Resolved. Thank you.
Hi all! We are setting up a lab in AWS ECS where all workloads are deployed as Fargate tasks. We successfully deployed and configured an otel-collector (http://quay.io/signalfx/splunk-otel-collector:latest) as a Fargate task, which is sending traces (visible under APM in Splunk Observability) and metrics (visible in the Metrics Finder). Is there any way to get the tasks/containers to appear in the Infrastructure navigator? Thanks!
@Karthikeya  if the "v-" prefix is not guaranteed, then your regex needs to be updated  
@kiran_panchavat As per the user, the "v" prefix before the FQDN is not always guaranteed.
Hi @gcusello , For some reason, the provided regex is not working. Can you please recheck?
Hi, We have Splunk Enterprise 9.3.1 that we use as a Heavy Forwarder, sending data to Splunk Cloud indexers using the UF credentials downloaded from the Splunk Cloud instance. After upgrading to 9.4.0, we started getting TCPOutAutoLB-0 error messages on the HF. We tried installing a fresh 9.4.0 and 9.4.1 with just the UF certificate installed, and we still get the error. A fresh 9.3.1 install with the same certificate does not have the error.

Has anyone experienced the same problem? How were you able to fix it?

Regards, Edward
@Karthikeya  Check this    
Hi @Karthikeya , if it runs, it's correct! Anyway, I'd use this:

| rex "vs_name\"\:\"\w\-(?<fqdn>[^-]+)-\d+"

Ciao. Giuseppe
I'm not very familiar with Windows Splunk setup, but it is highly unlikely that this is related to the Splunk installer. I'd hazard a guess that it's either a coincidence (for example, a GPO update happened at the same time) or you have some third-party solution (an EDR of some sort?) that was triggered by the installation.
Hi, In Disk Management, the drive is showing as Healthy and the drive letter is assigned correctly. However, I'm still unable to access the drive; it continues to show the same error when I try to open it. I'm attaching the exact error message that appears when attempting to open the D:\ drive for your reference.

I have also tried running the chkdsk /f /r command. It completed successfully, but unfortunately the issue still persists.

Would appreciate any further suggestions. Thanks!
Regex

Please tell me what would be the best and most effective way to write a regex here:

"vs_name":"v-juniper-uat.opco.sony-443",

I need to extract juniper-uat.opco.sony from every event as the FQDN. I wrote the regex below and it works. Please tell me whether this is good, or any suggestions you have to make it more reliable:

| rex "vs_name\"\:\"[^\/]\-(?<fqdn>[^\/]+)\-\d+\"\,"
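For what it's worth, patterns like this can be tested outside Splunk before deploying them. A quick Python check against the sample event text; the slightly simplified pattern below is one possible form, not the only valid one:

```python
import re

# The sample snippet from the question.
sample = '"vs_name":"v-juniper-uat.opco.sony-443",'

# Anchor on the literal vs_name prefix; greedily capture everything up to
# the final -<port> suffix into a named group.
m = re.search(r'vs_name":"v-(?P<fqdn>.+)-\d+",', sample)
print(m.group("fqdn"))  # juniper-uat.opco.sony
```

Because the capture is greedy and the trailing `-\d+` is required, internal hyphens in the hostname are kept while the port suffix is dropped.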
Hello, I've recently encountered a problem with the severity level within the ARAs. My current severity level for this specific event is Medium, but depending on the output of a specific field I want to alter that severity level to High or even Critical. I just couldn't find a way to alter it without creating a completely new correlation search that checks that specific field and is always on.
Hello @jkat54! I'm having some trouble getting the app to work, and the ultimate goal is to be able to change the ownership of searches automatically (e.g. from a scheduled report). Here is the search:

``` get all info about the searches on the instance ```
| rest /services/saved/searches splunk_server=local
``` keep only searches that are not owned by "user2", are enabled, and belong to the app "search" ```
| search eai:acl.owner!="user2" disabled=0 eai:acl.app="search"
| rename eai:acl.owner as owner, eai:acl.app as app, eai:acl.sharing AS sharing
``` extract the management port and the search name, already urlencoded ```
| rex field=id "^\S+(?<mngmport>\:\d+)\/servicesNS\/\S+\/saved\/searches\/(?<search_name>\S+)$"
``` build the uri for the curl ```
| eval url = "https://" + splunk_server + mngmport + "/servicesNS/" + owner + "/" + app + "/saved/searches/" + search_name + "/acl"
``` future use, not yet implemented ```
| eval description = description + " - moved from " + owner
``` constructing data = {"owner":"user2","sharing":"global"} ```
| eval data = json_object("owner", "user2", "sharing", sharing)
``` debug & Co ```
| table splunk_server app owner title description disabled action.notable cron_schedule url data id sharing *
``` the curl, which isn't working / I'm probably doing something wrong here ```
| curl urifield=url method="post" splunkauth="true" debug=true datafield=data
| table curl*

I've tried to specify the cert in some way, but it seems that there are no args that I can pass for it. Since I can't find a solution to this (searching online I found a suggestion to bypass SSL inspection, but in my case I don't think I can solve it with that), I'm here to ask for help. I prefer to avoid using simple authentication (user:password).
The error I get is from the curl_message field:

HTTPSConnectionPool(host='host', port=8089): Max retries exceeded with url: /servicesNS/user1/search/saved/searches/dummy%20search/acl (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1143)')))

curl_status: 408

Thanks in advance!
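One option, assuming you can export your internal CA chain to a file, is to perform the ACL change outside SPL and point the client at that CA bundle instead of disabling verification. A rough sketch with Python's standard library; the host, port, token, and CA path below are placeholders, not values from the original post:

```python
import ssl
import urllib.parse
import urllib.request

def build_acl_request(host, port, owner, app, search_name, new_owner, sharing):
    # Saved-search names can contain spaces, so URL-encode them.
    quoted = urllib.parse.quote(search_name)
    url = f"https://{host}:{port}/servicesNS/{owner}/{app}/saved/searches/{quoted}/acl"
    # The acl endpoint takes form-encoded owner/sharing fields.
    data = urllib.parse.urlencode({"owner": new_owner, "sharing": sharing}).encode()
    return url, data

def post_acl(url, data, token, cafile):
    # Trust the internal CA explicitly rather than turning verification off.
    ctx = ssl.create_default_context(cafile=cafile)
    req = urllib.request.Request(
        url, data=data, headers={"Authorization": f"Bearer {token}"}
    )
    return urllib.request.urlopen(req, context=ctx)

url, data = build_acl_request(
    "splunk.example.com", 8089, "user1", "search", "dummy search", "user2", "global"
)
print(url)
```

The `cafile` argument to `ssl.create_default_context` is the key piece: it replaces the default trust store with your internal CA, which is what the `| curl` command above apparently cannot be told to do.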
Hi. If your Splunk version is 9.1 or higher, please refer to the case below. You can solve it by setting the option below in server.conf to false.

> https://splunk.my.site.com/customer/s/article/PreforkedSearchProcessException-can-t-launch-new-search-process-because-pool-is-full

[general]
enable_search_process_long_lifespan = false

However, since the default setting is true, it is recommended to contact Splunk support before deciding.