All Topics


Hi all, I am trying to pull Akamai logs into Splunk, so I installed this app on our heavy forwarder (HF): https://splunkbase.splunk.com/app/4310. In the data input I filled in all the required fields (as provided by Akamai), but when I try to save it I get the following error: "Encountered the following error while trying to save: HTTP 404 -- Action forbidden." What does this error mean? Is the issue on the Akamai end or the Splunk end? We only recently brought up this HF; could that be related? Please help me resolve this error.
We are using the following PowerShell script to monitor Azure AD authentication-enabled URLs in Splunk. However, when incorrect credentials are entered, a 200 response code is returned instead of the expected failure response (e.g., 401 Unauthorized). Has anyone encountered this issue? Please help us rectify this and ensure that incorrect credentials are flagged with the appropriate response code.

# Prompt user for credentials
$credential = Get-Credential

# Define target URL
$targetUrl = "<TARGET_URL>"  # URL to monitor

# Convert credentials to Base64 for the Authorization header
$username = $credential.UserName
$password = $credential.GetNetworkCredential().Password
$authValue = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("$username`:$password"))
$headers = @{ Authorization = "Basic $authValue" }

# Send request with Authorization header
try {
    $response = Invoke-WebRequest -Uri $targetUrl -Headers $headers -Method Get -UseBasicParsing -ErrorAction Stop

    # Check whether the server actually challenges for authentication
    if ($response.StatusCode -eq 200 -and $response.Headers["WWW-Authenticate"]) {
        Write-Host "Authentication failed: Invalid credentials provided."
    } else {
        Write-Host "Response Code: $($response.StatusCode)"
    }
} catch {
    if ($_.Exception.Response.StatusCode -eq 401) {
        Write-Host "Authentication failed: Invalid credentials provided."
    } else {
        Write-Host "Request failed with error: $($_.Exception.Message)"
    }
}
Help: when I try to run the following search, I get "Error in 'stats' command: The argument 'span=1min' is invalid."

index=transactions tipo_transaccion="Retiro de Efectivo" (emisor="VISA" AND tipo_cuenta="Crédito")
| eval is_authorized=if(codigo_respuesta=="00" OR codigo_respuesta=="000", 1, 0)
| eval is_declined=if(is_authorized==0 AND (codigo_respuesta!="91" AND codigo_respuesta!="68" AND codigo_respuesta!="timeout"), 1, 0)
| eval is_timeout=if(codigo_respuesta=="91" OR codigo_respuesta=="68" OR codigo_respuesta=="timeout", 1, 0)
| stats count as total_txn, sum(is_authorized) as authorized_txn, sum(is_declined) as declined_txn, sum(is_timeout) as timeout_txn, sum(eval(is_authorized*importe)) as authorized_amount, sum(eval(is_declined*importe)) as declined_amount, sum(eval(is_timeout*importe)) as timeout_amount by _time span="1min"

Please help; I really don't know what is causing the error.

Regards, Dtapia
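For reference, stats does not accept a span argument; that option belongs to timechart, or to an explicit bin on _time before the stats. A minimal sketch of the usual pattern, assuming per-minute buckets are the intent (the remaining sum() clauses attach unchanged):

index=transactions tipo_transaccion="Retiro de Efectivo" (emisor="VISA" AND tipo_cuenta="Crédito")
| eval is_authorized=if(codigo_respuesta=="00" OR codigo_respuesta=="000", 1, 0)
| bin _time span=1m
| stats count as total_txn, sum(is_authorized) as authorized_txn by _time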
We created a Splunk token and added it to the SingulrAI environment along with the Splunk endpoint details (site URL and Splunk management port) to send logs. However, Singulr AI was unable to pick up Splunk logs due to connectivity or network timeout issues. Singulr AI support mentioned they are seeing connectivity/network timeout issues with the provided Splunk domain and port from the Singulr collector (deployed in our organization's environment). What could be the reason?
Hi All, I have a lookup that contains a set of email IDs and associated accounts. Example:

Account ID   OWNER_EMAIL
34234234     test1@gmail.com; test2@gmail.com
123234234    test3@gmail.com;test4@gmail.com

<logic>
| eval email_list = split(OWNER_EMAIL, ";")
| stats values(email_list) as email_list values(ENVIRONMENT) as ENVIRONMENT values(category) as EVENT_CATEGORY values(EVENT_TYPE) as EVENT_TYPE values(REGION) as Region values(AFFECTED_RESOURCE_ARNS) as AFFECTED_RESOURCE_ARNS

I have configured $result.email_list$ in the alert action's email "To" setting. The email is sent successfully, but all of the results together are sent to every recipient.

Result:

Account ID   Email_list                        Environment  Category   Type    Region  Arns      Description
34234234     test1@gmail.com; test2@gmail.com  Development  test_cat1  Event1  global  testarn1  testdescr1
123234234    test3@gmail.com;test4@gmail.com   Production   test_cat2  Event2  global  testarn2  testdescr2

When the alert is triggered, one email should go to test1@gmail.com and test2@gmail.com (both in the To field) with a body containing only the first row, and another email should go to test3@gmail.com and test4@gmail.com (both in the To field) with a body containing only the second row. Please help me achieve this.

Regards, PNV
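One common approach is to set the alert's trigger condition to "For each result", so that $result.email_list$ resolves per row. Alternatively, the fan-out can be done in SPL with map plus sendemail. A minimal sketch, assuming a lookup named account_owners.csv (hypothetical), a configured mail server, and that sendemail is permitted in your environment; sendemail expects comma-separated recipients, hence the replace():

| inputlookup account_owners.csv
| rename "Account ID" as account_id
| eval recipients=replace(OWNER_EMAIL, ";", ",")
| map maxsearches=100 search="| makeresults
    | eval account_id=\"$account_id$\"
    | sendemail to=\"$recipients$\" subject=\"Alert for account $account_id$\" sendresults=true inline=true"

map runs the inner search once per row, substituting each row's $account_id$ and $recipients$ values, so each owner pair receives only its own row.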
I want to route data. I want to split one sourcetype into two. When I click Extract New Fields, it says "The events associated with this job have no sourcetype information." I don't know where the data is being stored incorrectly.
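For the routing part, sourcetype splitting is normally done at parse time with a props.conf/transforms.conf pair on the first heavy forwarder or indexer that handles the data. A minimal sketch, with the stanza, regex, and sourcetype names all placeholders:

# props.conf
[original_sourcetype]
TRANSFORMS-split = route_to_second_sourcetype

# transforms.conf
[route_to_second_sourcetype]
REGEX = pattern_matching_the_events_to_split_off
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::second_sourcetype

Events matching the REGEX are rewritten to the second sourcetype at index time; everything else keeps the original.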
Hello, can Security Essentials import security advisories from vendors like Broadcom or Microsoft? I would like to compare those against our inventory and raise alerts if anything is affected by a security advisory. Example: import VMSA advisories from Broadcom and compare them against the ESX, VM, and vmTools data that report into Splunk.

Cheers, Andre
I'm experiencing an issue with the Splunk DB Connect app under Data Inputs > Choose Table, where the Schema dropdown fails to populate. The connection status shows as healthy and connected. When I use Preview Data and run a SQL query like:

SELECT * FROM ALL_USERS;

it successfully returns data, indicating that the schema can be queried manually. To test this further, I created a simple Java collector that fetches schemas using the same JDBC driver:

java -cp .:tibero6-jdbc-14.jar TiberoGetSchemas

This custom collector successfully retrieves the schema list, confirming that the user has access. The user account has been granted the necessary privileges, including access to ALL_USERS, the dictionary views, and SELECT_CATALOG_ROLE, in collaboration with our Tibero DBA. However, in Splunk DB Connect the Schema dropdown remains empty and unselectable. Has anyone encountered a similar issue with Tibero and Splunk DB Connect? Any suggestions would be greatly appreciated.

- DB: Tibero 6
- Splunk DB Connect version: 3.2.0
We have a total of five search heads. While four of them execute the curl command successfully, one search head is encountering an SSL error, specifically an SSLError with a curl status of 408:

HTTPSConnectionPool(host='localhost', port=8801): Max retries exceeded with url: /servicesNS/nobody/alert/saved/searches/alert/acl (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1106)')))

What are the next steps to identify and resolve the root cause of this SSL error?
I've done a bit of searching and haven't quite found a solution to what I'm trying to accomplish (or I haven't understood the previous solutions). But essentially I'm trying to write an SPL query (for use in a dashboard) that will append a string (domain) to a list of values (hosts) passed by a token prior to processing the search. For example, if the value passed by token $DeviceNames$ is "host1,host2,host3", the goal would be to return results as if the query was equivalent to: hostname IN (host1.domain.com,host2.domain.com,host3.domain.com)  
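One way to do this, sketched under the assumption that the token arrives comma-separated and the domain suffix is fixed: expand the token inside a subsearch and let format build the clause. The index and field names here are placeholders:

index=my_index
    [| makeresults
     | eval hostname=split("$DeviceNames$", ",")
     | mvexpand hostname
     | eval hostname=hostname.".domain.com"
     | table hostname
     | format ]

format emits ( ( hostname="host1.domain.com" ) OR ( hostname="host2.domain.com" ) OR ... ), which is equivalent to the desired IN clause.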
Simple search, but I'm having issues nailing down what I want to see. This search returns all the views the logged-in user owns:

| rest splunk_server=local /servicesNS/-/-/data/ui/views
| rename author as user
| search [| rest /services/authentication/current-context splunk_server=local | fields + username | rename username as user]
| rename eai:acl.app as App, eai:acl.perms.read as Permissions, title as View, label AS Dashboard
| table Dashboard

I would like it to show all the views the logged-in user has access to instead, not just the ones they own. Thanks for the help.
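If I'm reading the endpoint's behavior correctly, | rest runs in the searching user's context and only returns entries that user can read, so dropping the author-matching subsearch may be all that's needed — a sketch, not verified against every sharing configuration:

| rest splunk_server=local /servicesNS/-/-/data/ui/views
| rename eai:acl.app as App, label as Dashboard
| table Dashboard App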
Hi Team, can someone please help me find how to fetch the status of application A1, which has 5 jobs (Job1, Job2, Job3, Job4, Job5) running every day?

Status of Application — this needs to be extracted using the query attached below:
Planned: if the current time is less than the expected time of JOB1.
OK-Running: if the current time is between the expected time of JOB1 and the expected time of JOB5, and the status of all the jobs is either OK or PLANNED.
KO-FAILED: if the current time is between the expected time of JOB1 and the expected time of JOB5, and the status of any one job is KO.

Query used today to fetch the status of each job in the application:

index=ABC ( TERM(JOB1) OR TERM(JOB4) OR TERM(JOB2) OR TERM(JOB3) OR TERM(JOB5) OR TERM(JOB6) OR TERM(JOB7) ) ("- ENDED" OR "- STARTED" OR "ENDED - ABEND")
| eval Function = case(like(TEXT, "%ENDED - ABEND%"), "ABEND", like(TEXT, "%ENDED - TIME%"), "ENDED", like(TEXT, "%STARTED - TIME%"), "STARTED")
| eval DAT = strftime(relative_time(_time, "+0h"), "%d/%m/%Y"), {Function}_TIME=_time
| rename DAT as Date_of_reception
| stats max(Date_of_reception) as Date_of_reception max(ENDED_TIME) as ENDED_TIME max(STARTED_TIME) as STARTED_TIME max(ABEND_TIME) as ABEND_TIME by JOBNAME
| inputlookup append=t ESES_Job_MIFID_PPE.csv
| stats values(*) as * by JOBNAME
| eval DAY_OF_WEEK = strftime(strptime(Date_of_reception, "%d/%m/%Y"), "%A"), today = strftime(1743030000, "%Y-%m-%d"), TO_DAY = strftime(strptime(today, "%Y-%m-%d"), "%A"), Diff=ENDED_TIME-STARTED_TIME
| rename STARTED_TIME as START_TIME1, ENDED_TIME as END_TIME1, ABEND_TIME as ABEND_TIME1
| eval diff_time = tostring(Diff, "duration"), diff_time_1=substr(diff_time,1,8), START_TIME = Date_of_reception." ".strftime((START_TIME1),"%H:%M:%S"), END_TIME = Date_of_reception." ".strftime((END_TIME1),"%H:%M:%S"), END_TIME2 = strftime((END_TIME1),"%H:%M:%S"), ABEND_TIME = Date_of_reception." ".strftime((ABEND_TIME1),"%H:%M:%S"), ABEND_TIME2 = strftime((ABEND_TIME1),"%H:%M:%S"), EXPECTED_TIME = exp_time, DEADLINE_TIME = high_dl2
```EXPECTED_TIME_run = Date_of_reception." ".EXPECTED_TIME, EXPECTED_TIME_run = strptime(EXPECTED_TIME_run, "%d/%m/%Y %H:%M:%S"), TimeDiff=EXPECTED_TIME_run-now(), EXP_TIME_norun = if(TO_DAY = "Friday", exp_time2, exp_time1), EXPECTED_TIME_norun = today + " " + EXP_TIME_norun, EXPECTED_TIME_norun = strptime(EXPECTED_TIME_norun, "%Y-%m-%d %H:%M:%S"), TimeDiff_norun = EXPECTED_TIME_norun-now(), Time_Diff = now() - strptime(START_TIME, "%d/%m/%Y %H:%M:%S")```
| eval STATUS = if(isnotnull(END_TIME2) AND (END_TIME2 <= ABEND_TIME2), "ABEND",
    if(isnotnull(END_TIME2) AND (END_TIME2 <= DEADLINE_TIME), "OK",
    if(isnotnull(END_TIME2) AND (END_TIME2 > DEADLINE_TIME), "BREACHED",
    if(isnull(END_TIME2) AND isnull(START_TIME1) AND (TimeDiff_norun > 300), "PLANNED",
    if(isnull(END_TIME2) AND isnull(START_TIME1) AND isnull(TimeDiff) AND (TimeDiff_norun < -600) AND (TimeDiff_norun >= -1800), "JOB NOT STARTED YET",
    if(isnull(END_TIME2) AND isnull(START_TIME1) AND isnull(TimeDiff) AND (TimeDiff_norun < -1800), "JOB DID NOT EXECUTED",
    if(isnull(END_TIME2) AND isnotnull(START_TIME1) AND (Time_Diff > 600), "FAILED",
    if(isnull(END_TIME2) AND isnotnull(START_TIME1) AND (TimeDiff <= 600), "RUNNING",
    if(isnull(END_TIME2) AND isnull(START_TIME1) AND JOBNAME IN ("$JOB3"), "OK-Interest file is received",
    if(isnull(END_TIME2) AND isnull(START_TIME1) AND JOBNAME IN ("$JOB6"), "OK-Mifid 2 file is received",
    if(isnotnull(END_TIME2) AND isnotnull(START_TIME1) AND JOBNAME IN ("$JOB3"), "KO-Interest file Not received",
    if(isnotnull(END_TIME2) AND isnotnull(START_TIME1) AND JOBNAME IN ("$JOB6"), "KO-Mifid 2 file Not received",
    "WARNING"))))))))))))
| rename diff_time_1 as EXECUTION_TIME
| sort Order
| table Application, JOBNAME, Description, EXPECTED_TIME, DEADLINE_TIME, START_TIME, END_TIME, EXECUTION_TIME, STATUS
| fillnull value="-"
I have installed the Splunk DBX forwarder on one of my VMs. Now, when I try to create a connection to MongoDB, I get this error (our MongoDB uses certs and a key for authentication, not a username and password):

No suitable driver found for jdbc:mongo://<host>:<port>/?authMechanism=MONGODB-X509&authSource=$external&tls=true&tlsCertificateKeyFile=<path to cert key pair>&tlsCAFile=<path to ca cert>
Diagnosis: No compatible drivers were found in the 'drivers' directory.
Possible resolution: Copy the appropriate JDBC driver for the database you are connecting to in the 'drivers' directory.

Splunk DBX Add-on for MongoDB: 1.2.0
List of Mongo drivers tried:
mongodb-driver-core-4.10.2.jar
mongojdbc4.8.3.jar
splunk-mongodb-jdbc-1.2.0.jar
mongodb-driver-sync-4.10.2.jar
ojdbc8.jar
UnityJDBC_Trial_Install.jar
mongodb-jdbc-2.2.2-all.jar
mongo-java-driver-3.12.14.jar
mongodb-driver-core-5.2.1.jar
mongodb-driver-sync-5.2.1.jar

But I get the same error each time.
Splunk DBX forwarder version: Splunk 6.4.0
MongoDB version: 7.0.14

This is the db_connection_types.conf:

[mongo]
displayName = MongoDB
jdbcDriverClass = com.mongodb.jdbc.MongoDriver
ServiceClass = com.mongodb.jdbc.MongoDriveri
jdbcUrlFormat = jdbc:mongo://<host:port>,<host:port>,<host:port>/?authMechanism=MONGODB-X509&authSource=$external&tls=true&tlsCAFile=<path to ca file>&tlsCertificateKeyFile=<path to cert and key file>
useConnectionPool = false
port = 10924
ssl = true
sslMode = requireSSL
sslCertificatePath = <path to file>
sslCertificateKeyPath = <path to file>
sslAllowInvalidHostnames = false
authSource = $external
tlsCipherSuite = "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
Hi Team, how can I combine multiple data inputs into one? Basically, I have 5 different data inputs that collect the same data from the user. I want one data input that internally runs 2 different data types with different polling intervals — e.g., different intervals for "performance" and "inventory" data. Is this possible with the Python SDK, and how?
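A modular input scheme can be instantiated multiple times, so one common shape is a single scheme with one stanza per data type, each carrying its own interval; the SDK script then branches on a custom argument. A sketch in inputs.conf terms, where the scheme name myapp and the data_type argument are both hypothetical:

# inputs.conf
# one scheme, two stanzas, each with its own polling interval
[myapp://performance]
data_type = performance
interval = 60

[myapp://inventory]
data_type = inventory
interval = 3600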
I have a Splunk clustered environment where the License Manager has a non-existent (cannot be resolved via name lookup) serverName configured (etc/system/local/server.conf -> serverName). It has been running like this for some time, but it is causing issues with license monitoring in the Monitoring Console. To eliminate this issue and bring this Splunk instance in line with our other instances, I tried simply changing serverName in server.conf to the hostname and restarting the Splunk service. The service starts without complaint, but the Monitoring Console then reports that all the search heads are suddenly unreachable, and querying the search heads for shcluster-status results in errors. Reverting to the old name and restarting fixes the search-head unreachable status.

This License Manager server has the following roles:
* License Manager
* (Monitoring Console)
* Manager Node

I do not see why this change would affect search heads; the indexers are fine, and the Deployer is a different server. I found documented issues (for this kind of change) for indexers and the Monitoring Console itself, and notes that it can have side effects for the Deployment Server, but no real hit on search heads/SHC. As I do not have permanent access to this instance, I have to prepare a remediation plan, or at least an analysis. I'm looking for hints on where to start my investigation. Maybe someone has successfully changed a License Master name. Hoping I'm missing something obvious. Thanks.
The field values of the 2nd and 3rd events are interchanging. Please suggest how to maintain order with the Splunk stats command. I can't use any fields in the stats by clause other than token_id.

Sample events:

|makeresults
|eval token_id="c75136c4-bdbc-439b" |eval doc_no="GSSAGGOS_QA-2931" |eval key=2931 |eval keyword="DK-BAL-AP-00613"
|append [| makeresults |eval token_id="c75136c4-bdbc-439b" |eval doc_no="GSSAGGOS_QA-2932" |eval key=2932 |eval keyword="DK-Z13-SW-00002"]
|append [| makeresults |eval token_id="c75136c4-bdbc-439b" |eval doc_no="GSSAGGOS_QA-2933" |eval key=2933 |eval keyword="DK-BAL-AP-00847"]
| stats values(key) as key values(keyword) as keyword values(doc_no) as doc_no by token_id
| eval row=mvrange(0,mvcount(doc_no))
| mvexpand row
| foreach doc_no keyword key [| eval <<FIELD>>=mvindex(<<FIELD>>,row)]
| fields - row

Search result output:

token_id            key   keyword          doc_no
c75136c4-bdbc-439b  2931  DK-BAL-AP-00613  GSSAGGOS_QA-2931
c75136c4-bdbc-439b  2932  DK-BAL-AP-00847  GSSAGGOS_QA-2932
c75136c4-bdbc-439b  2933  DK-Z13-SW-00002  GSSAGGOS_QA-2933

Expected output:

token_id            key   keyword          doc_no
c75136c4-bdbc-439b  2931  DK-BAL-AP-00613  GSSAGGOS_QA-2931
c75136c4-bdbc-439b  2932  DK-Z13-SW-00002  GSSAGGOS_QA-2932
c75136c4-bdbc-439b  2933  DK-BAL-AP-00847  GSSAGGOS_QA-2933
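The underlying cause is that values() sorts each field's values lexicographically and independently, so per-event alignment across fields is lost. One workaround, sketched here with a "|" delimiter (assumed not to occur in the data): glue each event's fields into one string before the stats, then split them back apart afterwards, replacing the mvrange/foreach reconstruction:

| eval combo=key."|".keyword."|".doc_no
| stats values(combo) as combo by token_id
| mvexpand combo
| eval key=mvindex(split(combo,"|"),0), keyword=mvindex(split(combo,"|"),1), doc_no=mvindex(split(combo,"|"),2)
| fields - combo

Because each combo string carries all three fields from one event, the rows stay internally consistent no matter how values() orders them.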
Hello, I want to configure an alert for when a queue is full. We have Max Queue Depth and Current Queue Depth metrics. The problem is that there are 100 queues, and each queue has a different max value, so I can't use * for calculating the percentage. I don't want 100 health rules, and * is not allowed in a metric expression. Is there any way to set up such an alert? AppDynamics
Dear Splunkers!! I am facing an issue with a Splunk file-monitoring configuration. When I define the complete absolute path in inputs.conf, Splunk successfully monitors the files. Below are two examples of working stanza configurations:

Working configurations:
[monitor://E:\var\log\Bapto\BaptoEventsLog\SZC\0000000002783979-2025-03-27T07-39-33-128Z-SZC.VIT.BaptoEvents.50301.csv]
[monitor://E:\var\log\Bapto\BaptoEventsLog\SZC\0000000002783446-2025-03-27T05-09-20-566Z-SZC.VIT.BaptoEvents.50296.csv]

However, since more than 200 files are generated, specifying an absolute path for each file is not feasible. To automate this, I attempted to use a wildcard pattern in the stanza, as shown below:

Non-working configuration:
[monitor://E:\var\log\Bapto\BaptoEventsLog\SZC\*.csv]

Unfortunately, this approach does not ingest any files into Splunk. I would appreciate your guidance on resolving this issue. Looking forward to your insights.
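For reference, monitor stanzas do support wildcards, but a commonly recommended and more robust form is to monitor the directory itself and restrict matches with a whitelist (a regex applied to the full path). A sketch against the same directory:

[monitor://E:\var\log\Bapto\BaptoEventsLog\SZC]
whitelist = \.csv$
disabled = false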
Hi team, I have an index with 4 sourcetypes. The index has a searchable retention of 4 months. Is there any way we can keep the same retention for 3 of the sourcetypes while increasing 1 sourcetype to 8 months? For example:

Index=xyz
sourcetype=1: searchable retention 4 months
sourcetype=2: searchable retention 4 months
sourcetype=3: searchable retention 4 months
sourcetype=4: searchable retention 8 months
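Worth noting: Splunk retention (frozenTimePeriodInSecs) is set per index, not per sourcetype, so the usual approach is to route the long-retention sourcetype to its own index. A sketch, with the index name xyz_long hypothetical and the months approximated as 120 and 240 days:

# indexes.conf
# ~4 months: 120 days x 86400 s
[xyz]
frozenTimePeriodInSecs = 10368000

# ~8 months: 240 days x 86400 s
[xyz_long]
frozenTimePeriodInSecs = 20736000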
Hi, I just want to input the OpenCTI feed from OpenCTI into Splunk. I followed the installation instructions: https://splunkbase.splunk.com/app/7485. But there is an error in the _internal index as follows:

2025-03-27 16:50:02,889 ERROR pid=17581 tid=MainThread file=base_modinput.py:log_error:309 | Error in ListenStream loop, exit, reason: HTTPSConnectionPool(host='192.168.0.15', port=8080): Max retries exceeded with url: /stream/2cfe507d-1345-402d-82c7-eb8939228bf0?recover=2025-03-27T07:50:02Z (Caused by SSLError(SSLError(1, '[SSL: UNKNOWN_PROTOCOL] unknown protocol (_ssl.c:1106)')))

I was able to access the OpenCTI feeds using curl in the Splunk environment and from a browser as well, but I can't access the OpenCTI stream using the StreamID from Splunk to fetch the data. I think SSL is one of the issues. Please tell me if you know how to fetch the OpenCTI data into Splunk.