All Posts

Hi @M4rv1m

Are you running on-prem or Splunk Cloud? This app actually uses Python requests under the hood with verify=True set, which means it expects a valid certificate based on the CAs it has access to. I believe you can override the CAs that requests uses via the "REQUESTS_CA_BUNDLE" environment variable, which means you could possibly set this in $SPLUNK_HOME/etc/splunk-launch.conf to the CA of your Splunk instance, e.g.:

REQUESTS_CA_BUNDLE=/opt/splunk/etc/auth/cacert.pem

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Hi, we use version 9.2.4. The behaviour is independent of the search complexity. It also doesn't change whether I search internal logs or across several indexes; the index(es) will always be replaced by an *. The behaviour is the same in the list view, so it's not only table-view related.

BR
Hi, you can dynamically set the severity of a notable event in a correlation search by using an eval statement to populate the severity field based on your specific field's value:

<your search>
| eval severity=case(
    fieldX="value1", "high",
    fieldX="value2", "critical",
    true(), "medium"
)

The severity field in the correlation search result determines the notable event's severity in ES. Use eval with case() to assign severity dynamically based on the value of fieldX; the final pair, true(), "medium", acts as a default if no other condition matches.
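The first-match-wins behaviour of SPL's case() above can be illustrated in plain Python. This is a sketch only: the field name fieldX and the value-to-severity pairs are taken from the example, not from any real correlation search.

```python
def severity_for(field_x: str) -> str:
    # Conditions are checked in order and the first true one wins,
    # exactly like SPL's case(). The final return is the true(), "medium"
    # fallback from the example.
    if field_x == "value1":
        return "high"
    if field_x == "value2":
        return "critical"
    return "medium"

print(severity_for("value1"))  # high
print(severity_for("other"))   # medium
```

If no fallback pair were supplied, SPL's case() would instead return null for unmatched values, so notables could end up with no severity at all.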
Hi @Karthikeya

Does the following work for you? It makes the v- prefix optional:

|makeresults
| eval _raw="\"vs_name\":\"v-juniper-uat.opco.sony-443\","
| append [|makeresults | eval _raw="\"vs_name\":\"juniper-uat.opco.sony-443\","]
| rex field=_raw "vs_name\"\s*:\s*\"(?:v-)?(?<fqdn>.+)-\d+"
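The rex pattern above can be checked outside Splunk with Python's re module (SPL's rex uses PCRE, but this particular pattern behaves the same in both engines). The two sample strings are the ones from the makeresults test above.

```python
import re

# The optional (?:v-)? group lets the pattern match hostnames with or
# without the v- prefix; the greedy .+ backtracks to the final -<digits>
# so the port suffix is excluded from the capture.
pattern = re.compile(r'vs_name"\s*:\s*"(?:v-)?(?P<fqdn>.+)-\d+')

samples = [
    '"vs_name":"v-juniper-uat.opco.sony-443",',
    '"vs_name":"juniper-uat.opco.sony-443",',
]
for raw in samples:
    print(pattern.search(raw).group("fqdn"))  # juniper-uat.opco.sony (both)
```

Both samples yield the same fqdn, confirming the prefix is stripped when present and ignored when absent.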
Hi @ejose

Could you please share the errors you are receiving? Can you also confirm the certificate has not expired? I ask this specifically because a Splunk forwarder will remain connected to another Splunk server even after an SSL cert has expired, as long as it does not need to create a new connection. In other words, it's possible the certificate had already expired, but you only experienced an issue once the existing connection was closed down when you upgraded. You can check the validity dates with:

openssl x509 -in <PathToYourCert> -noout -dates
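If you want to script the expiry check, the notAfter line that the openssl command above prints can be parsed with Python's standard library. A minimal sketch, assuming a made-up example date rather than output from any real certificate:

```python
import ssl
import time

# Example of the format `openssl x509 -noout -dates` prints; this date is
# intentionally in the past and is not from a real certificate.
not_after_line = "notAfter=Jan  1 00:00:00 2020 GMT"
not_after = not_after_line.split("=", 1)[1]

# ssl.cert_time_to_seconds parses the GMT timestamp format used in
# certificates and returns seconds since the epoch (UTC).
expires_at = ssl.cert_time_to_seconds(not_after)
expired = expires_at < time.time()
print(expired)  # True -- this example date is in the past
```

Comparing the parsed timestamp against the current time tells you whether the cert on disk has lapsed, which is exactly the scenario described above where an old connection masked the expiry.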
Hi @Splunkie

To exclude cases that have transitioned to "Resolved" and only show currently active cases, find the latest status per case and filter where that status is "Active":

| your_search_here
| stats latest(status) as latest_status by case_id
| where latest_status="Active"

- stats latest(status) by case_id groups events by case and finds the most recent status per case
- where latest_status="Active" keeps only cases whose latest status is still "Active"
This effectively excludes cases that were later resolved or closed. Replace case_id with your actual case identifier field.
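The stats latest(...) + where logic above can be sketched in plain Python. The case IDs, timestamps, and status values here are illustrative only, not from any real data source.

```python
# Events are (timestamp, case_id, status) tuples, mimicking status-update
# events in the index.
events = [
    (1, "CASE-1", "Active"),
    (2, "CASE-1", "Resolved"),   # CASE-1 was later resolved -> excluded
    (1, "CASE-2", "Active"),     # CASE-2 is still active    -> kept
]

latest = {}
for ts, case_id, status in sorted(events):
    latest[case_id] = status     # last write per case wins, like latest()

active_cases = [c for c, s in latest.items() if s == "Active"]
print(active_cases)  # ['CASE-2']
```

CASE-1 drops out because its most recent status is "Resolved", even though it produced an "Active" event earlier, which is the behaviour the original question asked for.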
Hi, which version of Splunk are you running? For me, clicking "New Search" in either table view or list view gives the same behaviour: in my example the new search contained index=_internal (which I had searched) plus the field/value I clicked. Does it differ if you use a more complex query?
Hello, today I found a bug(?) in the "New Search" function of the table view. What I mean by the "New Search" function: run a search and select the table view (not raw or list), then click on one of the shown values of a field, for example the value of a host field, and select "New Search". A new search starts with the selected field+value, but instead of reusing the index(es) from the previous search, only a * is used. As we have not defined any default indexes in our environment, those searches won't return any results, because no index is included in the search. Is there a way to reconfigure this so that the original index(es) are kept instead of a plain asterisk?

Best Regards.
As I said, that sounds improbable but of course not impossible. Especially since it's Windows, I'd open a support case at this point (or just do a quick Windows reinstall if you can afford it; that might be faster). BTW, did you verify the installer checksum before running it?
Hi, I understand your point, but I’d like to clarify a few things from my side. The issue with my D:\ drive started immediately after installing Splunk Enterprise — just before the installation, I had accessed the drive without any problems. There are no EDR or third-party security solutions installed on my system, so that can be ruled out. Although the Splunk installation process appeared to complete, I can’t find it anywhere on the system — not in the Control Panel, not in the installation directory, and there’s no desktop shortcut either. It seems like something went wrong during the install, and I strongly suspect that’s what caused the issue with my drive. Appreciate your input, and I’m open to any suggestions that could help further.
Hi Friends, I am working on a query that checks whether the value of a field has changed to a state of Resolved, so that the case can be excluded from the results of active cases. The field I am trying to use to check whether a case has been resolved is the status field. I need help with a query that looks at all cases with the status of Active and removes from the results any cases whose status has since changed to Resolved. Thank you.
Hi all! We are setting up a lab in AWS ECS where all workloads are deployed as Fargate tasks. We successfully deployed and configured an otel-collector (http://quay.io/signalfx/splunk-otel-collector:latest) as a Fargate task, which is sending traces (visible in APM in Splunk Observability) and metrics, which can be seen in the Metrics Finder. Is there any way to get the tasks/containers to appear in the Infrastructure navigator? Thanks!
@Karthikeya  if the "v-" prefix is not guaranteed, then your regex needs to be updated  
@kiran_panchavat As per the user, the v prefix before the fqdn is not always guaranteed.
Hi @gcusello, for some reason the provided regex is not working. Can you please recheck?
Hi,

We have Splunk Enterprise 9.3.1 that we use as a heavy forwarder sending data to Splunk Cloud indexers, using the UF credentials downloaded from the Splunk Cloud instance. After upgrading to 9.4.0, we started getting TCPOutAutoLB-0 error messages on the HF. We tried installing a fresh 9.4.0 and a fresh 9.4.1 with just the UF certificate installed, and we still get the error. Installing a fresh 9.3.1 with the same certificate does not produce the error. Has anyone experienced the same problem? How were you able to fix it?

Regards,
Edward
@Karthikeya  Check this    
Hi @Karthikeya, if it runs, it's correct! Anyway, I'd use this:

| rex "vs_name\"\:\"\w\-(?<fqdn>[^-]+)-\d+"

Ciao.
Giuseppe
I'm not very familiar with Windows Splunk setup, but it is highly unlikely that this is related to the Splunk installer. I'd hazard a guess that it's a coincidence and that at the same time there was, for example, a GPO update, or that you have some third-party solution (an EDR of some sort?) which was triggered by the installation.
Hi, In Disk Management, the drive is showing as Healthy and the drive letter is assigned correctly. However, I'm still unable to access the drive — it continues to show the same error when I try to open it. I'm attaching the exact error message that appears when attempting to open the D:\ drive for your reference. I have also tried running the chkdsk /f /r command. It completed successfully, but unfortunately, the issue still persists. Would appreciate any further suggestions. Thanks!