All Topics

We have a three-site multisite cluster architecture containing 6 indexers (2 per site), 3 search heads (one per site), 1 deployment server, 2 cluster managers (active and standby), and a deployer, all residing in AWS, running Splunk Enterprise 9.1.4. Our requirement is that AWS logs be pushed to our Splunk environment, so the AWS team is pushing CloudWatch logs via Kinesis Data Streams and a Lambda function (I'm not familiar with these services). They asked our team to create a HEC token and give them the token and endpoint details so they can push logs to Splunk. They don't want the Amazon Kinesis Firehose add-on; they prefer HEC only. My doubt is: where do I configure this HEC token? I want the data received from AWS to be load balanced across all 6 indexers (considering it will be a huge volume). I have gone through the HEC docs but they are unclear to me. Can someone help me with a step-by-step procedure, i.e. where to create the HEC token and so on? Thank you.

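Since clustered indexer peers don't expose a HEC management UI, the usual pattern is to define the token in an app on the cluster manager and push it to all peers, then put an AWS load balancer in front of port 8088 on the six indexers. A minimal sketch, where the app name, token, index, and sourcetype are placeholders:

# On the cluster manager: $SPLUNK_HOME/etc/manager-apps/hec_inputs/local/inputs.conf,
# then apply the cluster bundle so all six peers receive the same token.
[http]
disabled = 0
port = 8088
enableSSL = 1

[http://aws_cloudwatch]
disabled = 0
token = <generated-token-guid>
index = aws_logs
sourcetype = aws:cloudwatch

The AWS side would then send events to https://<load-balancer-dns>:8088/services/collector/event with the header "Authorization: Splunk <token>"; the load balancer spreads the connections across the indexers.
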
Hi, I have a question about Netskope onboarding to Splunk. I installed TA-NetSkopeAppForSplunk (4.1.0) on Splunk Cloud and configured the API tokens provided by Netskope, and logs are flowing. However, with the same add-on and tokens configured on Splunk Enterprise (an intermediate heavy forwarder), no logs arrive. I tried multiple local Splunk Enterprise instances for testing, and still no logs. Any recommendations on what the issue could be with the Enterprise version while it works fine on Cloud?

Expert advice needed. I was able to ingest CloudWatch logs for ECS and Lambda with Data Manager. Now I need to add tags like env=, service=, custom= to enrich the logs; the same was done for metrics with OTel Collector flags and the UF. For logs ingested with Data Manager, can I add an AWS resource tag to the CloudWatch log group I'm ingesting and expect that tag (key-value pair) to be added to the logs? Another possible solution could be to use the Splunk log driver directly from ECS instead of CloudWatch: according to the documentation, with the env option of the Splunk log driver I should be able to add some container environment variables to the log message. Same question for the Lambdas, though there presumably only the CloudWatch log group's resource tags could be attached to the ingested message. Any suggestions?

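If you go the Splunk log driver route, a sketch of the logConfiguration block in an ECS task definition; the URL, token, sourcetype, and variable names are placeholders, and env must list variables already defined on the container:

"logConfiguration": {
    "logDriver": "splunk",
    "options": {
        "splunk-url": "https://your-hec-endpoint:8088",
        "splunk-token": "<hec-token>",
        "splunk-sourcetype": "ecs:container",
        "env": "ENV,SERVICE"
    }
}

The env option copies the named container environment variables into each log message as attributes, which covers the env= and service= tags for ECS; it does not help for Lambda, which has no log driver.
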
We have a service for location 102. We preface entities that correlate with that service with 102 in their entity name; for example, a location-102 entity can be named "102AP_M1" for an AP, where the number before the device type is the location ("102" in this instance). We use the aliases entity_name and name to map entities to this alias. Due to our bad naming conventions, we have another entity named "100AP_M102" that is showing up as an entity mapped to service 102. I put in an alias of "name NOT 100AP_M102", but this didn't remove the entity from the service; I tried similar aliases with no luck. We use a base search to identify these APs and don't want to remove that base search because there are other dependencies. Any ideas on how to get this AP off this service?

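Since the matching is effectively substring-based, one option is to tighten the base search so that only names starting with the location code survive, rather than fighting it with aliases; a hedged sketch, assuming the base search exposes an entity_name field:

... your existing base search ...
| where match(entity_name, "^102[A-Za-z]")

Anchoring the location code to the start of the name means 100AP_M102 no longer matches, while 102AP_M1 still does, and the rest of the base search stays intact for its other dependencies.
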
Hello team, I know I can use stats instead of join; for our purposes we sometimes do that across two different indexes. Now we have one huge index from which we took some fields, and we have a data model that I can query using tstats. The problem is when I need to join result data from tstats with results from another index. Is this possible? I have the following (pseudo) query:

index=abc fieldX IN (Mary John Bob) OR
| tstats values(a) values(b) where fieldY=xy by _time span=1s
| stats values(somevalue) as SomeA, dc(index) as idx, values(fieldX) as X by CommonName

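One pattern that often works is to run the tstats leg as an appended subsearch and then merge everything with stats; a sketch that mirrors the pseudo query and assumes both result sets carry the CommonName field (tstats can only group by fields that exist in the data model):

index=abc fieldX IN (Mary, John, Bob)
| fields _time CommonName fieldX somevalue index
| append
    [| tstats values(a) AS a values(b) AS b where fieldY=xy by _time span=1s CommonName]
| stats values(somevalue) AS SomeA dc(index) AS idx values(fieldX) AS X values(a) AS A values(b) AS B by CommonName

Note that append is a subsearch, so the usual subsearch result and runtime limits apply; on recent versions | union behaves similarly without some of those limits.
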
Hello, today I found a bug(?) in the "New Search" function of the table view. What I mean by the "New Search" function: run a search and select the table view (not raw or list), then click on one of the shown values of a field, for example the value of a host field, and select "New Search". A new search starts with the selected field=value pair, but instead of reusing the index(es) from the previous search, only a * is used. As we have not defined any default indexes in our environment, those searches return no results, because no index is included in the search. Is there a way to reconfigure this, instead of getting a plain asterisk? Best Regards.

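I'm not aware of a setting that makes "New Search" carry the original index over, but you can make the fallback usable by giving roles default search indexes, so that a bare field=value search still hits something; a sketch, assuming a role named role_user and an index named main (adjust both to your environment):

# authorize.conf (or Settings > Roles > <role> > Indexes: "Indexes searched by default")
[role_user]
srchIndexesDefault = main
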
Hi friends, I am working on a query that checks whether the value of a field has changed to a state of Resolved, to exclude it from the results of active cases. The field I am using to check whether a case has been resolved is the status field. I need help with a query that looks at all cases with a status of Active and removes from the results any case whose status has since changed to Resolved. Thank you.

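A common pattern is to take the most recent status per case and keep only the ones still active; a sketch, assuming a case_id field and that later events carry the newer status (the index and field names are placeholders):

index=cases status IN (Active, Resolved)
| stats latest(status) AS current_status by case_id
| where current_status="Active"

Because latest() is evaluated per case_id, any case that has at least one newer Resolved event drops out of the result set.
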
Hi all! We are setting up a lab in AWS ECS where all workloads are deployed as Fargate tasks. We successfully deployed and configured an OTel Collector (http://quay.io/signalfx/splunk-otel-collector:latest) as a Fargate task, which is sending traces (seen in APM in Splunk Observability) and metrics, which can be seen in the Metrics Finder. Is there any way to get the tasks/containers to appear in the Infrastructure navigator? Thanks!

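The navigator generally needs ECS resource metadata attached to the metrics, so one thing worth trying is the resourcedetection processor from collector-contrib with the ecs detector wired into the metrics pipeline. A sketch only, not a verified recipe for the Fargate navigator specifically; receiver/exporter names depend on your existing config:

processors:
  resourcedetection:
    detectors: [env, ecs]
    override: false

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [resourcedetection]
      exporters: [signalfx]
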
Hi, we have Splunk Enterprise 9.3.1 that we use as a heavy forwarder sending data to Splunk Cloud indexers using the UF credentials downloaded from the Splunk Cloud instance. After upgrading to 9.4.0, we started getting TCPOutAutoLB-0 error messages on the HF. We tried installing fresh 9.4.0 and 9.4.1 instances with just the UF certificate installed, and we still get the error; a fresh 9.3.1 with the same certificate does not have the error. Has anyone experienced the same problem? How were you able to fix it? Regards, Edward

Regex question. Please tell me the best and most effective way to write a regex here: "vs_name":"v-juniper-uat.opco.sony-443", I need to extract juniper-uat.opco.sony from every event as the FQDN. I wrote the regex below and it works; is it good, or do you have suggestions to make it more reliable? |rex "vs_name\"\:\"[^\/]\-(?<fqdn>[^\/]+)\-\d+\"\,"

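A slightly tighter variant that anchors on the quotes and the literal v- prefix instead of the [^\/] classes; it assumes the value always starts with v- and ends with a numeric port:

| rex "\"vs_name\":\"v-(?<fqdn>[^\"]+)-\d+\""

The greedy [^\"]+ backtracks past the trailing -443, so fqdn captures juniper-uat.opco.sony even though the name itself contains hyphens.
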
Hello, I've recently encountered a problem with the severity level within the ARAs (adaptive response actions). The current severity for this specific event is Medium, but I want to raise it to High or even Critical depending on the output of a specific field. I couldn't find a way to alter it without creating a completely new correlation search that checks that specific field and is always on.

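I believe the notable action can pick the severity up from a severity field in the search results (worth verifying on your ES version), so one approach is to compute it inside the existing correlation search instead of cloning it; a sketch where risk_field and its values are placeholders for your specific field:

| eval severity=case(risk_field="very_bad", "critical",
                     risk_field="bad", "high",
                     true(), "medium")
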
Hello @jkat54! I'm having some trouble getting the app to work; the ultimate goal is to change the ownership of searches automatically (e.g. from a scheduled report). Here is the search:

``` get all info about the searches on the instance ```
| rest /services/saved/searches splunk_server=local
``` keep only searches that are not already owned by user2, are not disabled, and live in the search app ```
| search eai:acl.owner!="user2" disabled=0 eai:acl.app="search"
| rename eai:acl.owner as owner, eai:acl.app as app, eai:acl.sharing AS sharing
``` extract the management port and the search name, already URL-encoded ```
| rex field=id "^\S+(?<mngmport>\:\d+)\/servicesNS\/\S+\/saved\/searches\/(?<search_name>\S+)$"
``` build the URI for the curl ```
| eval url = "https://" + splunk_server + mngmport + "/servicesNS/" + owner + "/" + app + "/saved/searches/" + search_name + "/acl"
``` future use, not yet implemented ```
| eval description = description + " - moved from " + owner
``` constructing data = {"owner":"user2","sharing":"global"} ```
| eval data = json_object("owner", "user2", "sharing", sharing)
``` debug & co ```
| table splunk_server app owner title description disabled action.notable cron_schedule url data id sharing *
``` the curl, which isn't working / I'm probably doing something wrong here ```
| curl urifield=url method="post" splunkauth="true" debug=true datafield=data
| table curl*

I've tried to specify the cert in some way, but it seems there are no args I can pass for it. Since I can't find a solution (searching online I found a suggestion to bypass SSL inspection, but I don't think that solves my case), I'm here to ask for help. I prefer to avoid using simple authentication (user:password). The error I get in the curl_message field is:

HTTPSConnectionPool(host='host', port=8089): Max retries exceeded with url: /servicesNS/user1/search/saved/searches/dummy%20search/acl (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1143)')))
curl_status: 408

Thanks in advance!

Hi, I am a Splunk admin and we are re-assigning orphaned knowledge objects to my name as a temporary solution. I need to create a service account so that I can assign the orphaned knowledge objects to that account. I am doing this for the first time; could someone please specify what roles and capabilities I should assign? Also, is creating a service account the same process as creating a local user in Splunk, i.e. Settings > Users > Create User? PS: I am on Splunk Cloud, version 9.3.

I'm using Java 8: java version "1.8.0_202", Java(TM) SE Runtime Environment (build 1.8.0_202-b08), Java HotSpot(TM) Client VM (build 25.202-b08, mixed mode). I downloaded the DB agent from the controller and extracted it; when starting the DB agent it throws an exception: could not connect to the controller / invalid response from controller. Error message:

] ControllerHttpRequestResponse:25 - Fatal transport error while connecting to URL [/controller/instance/UNKNOWN_MACHINE_ID/systemagentregistration]: org.apache.http.NoHttpResponseException: grace202504080038013.saas.appdynamics.com:443 failed to respond
10 Apr 2025 10:10:52,716 WARN [DBAgent-1] RegistrationChannel:128 - Could not connect to the controller/invalid response from controller, cannot get registration information

I would like the first letters of my name to be capitalised
Hi Splunk Community, I recently attempted to install Splunk Enterprise on my Windows 11 local machine using the .msi installer. During the installation, I checked the box to create a desktop shortcut, but after the installation completed, the shortcut did not appear. I also changed the default installation directory from C:\ to my D:\ drive. After the installation, I noticed that my entire D: drive became inaccessible, and I'm now getting the following error:

Location is not available
D:\ is not accessible.

I'm unsure what went wrong during the installation: not only did the shortcut not appear, but now I can't even access my D: drive. Has anyone else experienced this issue? Could this be due to a permission error, drive formatting, or something caused by the installer? Any guidance on how I can fix or recover my D: drive and properly install Splunk would be greatly appreciated. Thanks in advance!

I created a KV Store lookup using the "Splunk App for Lookup File Editing" app; however, when I look at Settings > Lookups, the lookup definition doesn't show up. In addition, when running | inputlookup <name> I get the error "The lookup table '<name>' requires a .csv or KV store lookup definition". What am I missing?

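That error usually means the KV Store collection exists but has no lookup definition on top of it; you can add one under Settings > Lookups > Lookup definitions, or directly in transforms.conf. A sketch with placeholder names (match the collection name to what the app created, and mind the app context and sharing permissions):

# transforms.conf
[my_kvstore_lookup]
external_type = kvstore
collection = my_collection
fields_list = _key, field1, field2

After that, | inputlookup my_kvstore_lookup should resolve against the collection.
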
How do I show the details of the individual records behind a count total? I have a query that counts events and returns the total count when it's above a specified threshold. How do I display the individual events that constitute that total, but only for the groups where the count exceeds the threshold?

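eventstats attaches the aggregate to every event instead of collapsing them, so you can filter on the total and still keep the raw records; a sketch, assuming you group by host and use a threshold of 100 (both are placeholders):

index=your_index
| eventstats count AS total by host
| where total > 100
| table _time host total _raw
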
I have multiline events where I need to capture the error messages. The events are separated by "FAILED". I need to capture "Host key verification failed" from the first event and "scp: /logs/rsyslog/server02/: Not a directory" from the second event. The events:

FAILED to copy checksum for: /logs/archives/archived-logs/server01.log.gz
Host key verification failed.
lost connection

FAILED to copy checksum for: /logs/archives/archived-logs/server02.log.gz
You are attempting to access a system owned by XYZ
Provide proper credentials for access
Contact the system administrator for assistance
---This system is monitored---
Details as follows.
scp: /logs/rsyslog/server02/: Not a directory

I can capture the first message with: FAILED.+\:\s(?<LogFile>.+)(\n)(?<Message>.+(\n).+) but I don't know how to skip ahead and capture the last line of the second event for the Message field. Any help is most appreciated. Thank you.

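Since the wanted line sits at a different position in each event, one option is to extract LogFile and Message in two separate passes and match the Message against the known error patterns; the alternation below assumes those two error strings are representative of what can appear:

| rex "FAILED to copy checksum for:\s(?<LogFile>\S+)"
| rex "(?<Message>Host key verification failed\.|scp: \S+: Not a directory)"

The second rex scans the whole event, so it finds the error line regardless of how many banner lines precede it; add further alternatives to the group as new error messages show up.
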
I would like to configure an alert that triggers based on the action and subcategory below, running hourly to check whether there have been any hits: action=update subcategory=WEB_DLP_POLICY

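A sketch of the alert search, with a placeholder index; save it as an alert scheduled with cron 0 * * * * over a time range of the last 60 minutes, and set the trigger condition to "Number of Results is greater than 0":

index=your_index action=update subcategory=WEB_DLP_POLICY

Leaving the search as a bare event search (no | stats count) keeps the trigger condition simple and lets the alert link straight to the matching events.
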