All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello everyone, I have found posts going back 10 years about a specific error/bug(?): the src and dest IP addresses are swapped for the Cisco ASA event with ID 302013. If you look in the app, it even points out that these two fields are knowingly swapped. However, for the following Teardown event of the same connection, the IPs are not swapped. I am trying to figure out why this is the case. Since postings about this topic have been around for 10 years now and the app says "# direction is outbound - source and destination fields are swapped", it can't be an error. But I can't explain it. Can anyone comment on this?

Example:

<166>Dec 23 2024 10:36:04: %ASA-6-302013: Built outbound TCP connection 224811914 for dmz-sample-uidoc_172.27.252.0/27_604:172.27.252.1/8200 (172.27.252.1/8200) to fwr_sample_172.20.25.0/26:172.27.13.131/62388 (172.27.13.131/62388)
Result: src=172.27.13.131 || dest=172.27.252.1

<166>Dec 23 2024 10:36:04: %ASA-6-302014: Teardown TCP connection 224811914 for dmz-sample-uidoc_172.27.252.0/27_604:172.27.252.1/8200 to fwr_sample_172.20.25.0/26:172.27.13.131/62388 duration 0:00:00 bytes 0 TCP FINs from fwr_sample_172.20.25.0/26
Result: src=172.27.252.1 || dest=172.27.13.131

Thanks and best regards
Jan
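If the goal is simply to make the Built (302013) and Teardown (302014) events line up for correlation, one workaround is to re-swap the fields at search time. This is only a minimal sketch; the index name is a placeholder, and the field names (message_id, src, dest) are assumed to match what the Cisco ASA add-on extracts:

index=<your_firewall_index> sourcetype=cisco:asa message_id IN (302013, 302014)
| eval tmp=src
| eval src=if(message_id="302013", dest, src)
| eval dest=if(message_id="302013", tmp, dest)
| fields - tmp
| table _time message_id src dest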
We are currently trying to integrate Zoom logs using Splunk Connect for Zoom. We have a Load Balancer (LB) in front of a Heavy Forwarder (HF) in our configuration, but the URL validation fails when configuring the Zoom App webhook. I could not find any reference to load balancer configuration in the documentation. Therefore, we would like to confirm whether Splunk Connect for Zoom supports configuration via an LB. If so, please let us know if any additional settings are required on the LB or HF.
Hello, I am getting an error message "Sorry (170037) This folder is no longer available" when trying to register for, by now, 3 courses, including Search Under the Hood, Data Models, and Introduction to Enterprise Security. What is going on?
Hello, while trying to deploy ES using the Deployer GUI, I want to enable SSL. However, I faced the below:
Hi, I installed a Splunk app and its events are sent to the default index, but I need them to go to a custom index. I tried creating a local/inputs.conf file and repackaging the app, but the app was rejected when I uploaded it to Splunk Cloud, even after I changed the app ID. I also looked at the Splunk ACS API, but could not figure out whether it can be used to customize configuration files, or which endpoint URLs to use. Thanks in advance.
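For reference, the override being attempted usually looks like the sketch below. This is only an illustration: the stanza name and index are placeholders that would have to mirror the stanza in the app's default/inputs.conf, and for Splunk Cloud the repackaged app still has to pass cloud vetting (AppInspect) as a private app rather than a modified copy of the original:

# local/inputs.conf inside the repackaged app
# stanza name must match the one in default/inputs.conf (placeholder shown)
[monitor:///var/log/myapp]
index = my_custom_index
# the custom index must already exist in Splunk Cloud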
See https://community.splunk.com/t5/Splunk-Search/Upgrade-to-5-x-some-of-my-existing-searches-are-taking-longer-to/m-p/158429
We have a TrueSight integration with Splunk that sends results when a certain event occurs. Sometimes no events are sent, and I want to document only the first time this happens. For example:

Time:        0  5  10  15  20  25  30  35  40  45  50  55  0  5  10  15  20  25  30
# of Events: 3  4  0   0   0   8   15  2   0   5   55  66  0  0  0   0   0   8   9

I want to report a 0 value only the first time it occurs in a run of zeros, not every time there is a zero. Please assist.
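One way to flag only the first zero of each zero-run is to compare each time bucket with the previous one using streamstats. A minimal sketch; the base search and the 5-minute span are assumptions:

index=<truesight_index> <your filters>
| timechart span=5m count
| streamstats current=f window=1 last(count) as prev_count
| where count=0 AND (isnull(prev_count) OR prev_count!=0)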
I'm new to Splunk and trying to display a table in the format below after reading data from JSON. Could someone help me with the Splunk query?

Transaction Name    pct2ResTime
Transaction 1       4198
Transaction 2       1318
Transaction 3       451

JSON file name: statistics.json

{
  "Transaction1" : {
    "transaction" : "Transaction1",
    "pct1ResTime" : 3083.0,
    "pct2ResTime" : 4198.0,
    "pct3ResTime" : 47139.0
  },
  "Transaction2" : {
    "transaction" : "Transaction2",
    "pct1ResTime" : 1151.3000000000002,
    "pct2ResTime" : 1318.8999999999996,
    "pct3ResTime" : 6866.0
  },
  "Transaction3" : {
    "transaction" : "Transaction3",
    "pct1ResTime" : 342.40000000000003,
    "pct2ResTime" : 451.49999999999983,
    "pct3ResTime" : 712.5799999999997
  }
}
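Assuming the whole file is ingested as a single event, one approach is to flatten the nested keys with spath and then collect every *.pct2ResTime field with foreach. A minimal sketch (the index and source are placeholders, and the wildcard matching of dotted field names in foreach is an assumption worth verifying on your version):

index=<your_index> source="statistics.json"
| spath
| foreach *.pct2ResTime [ eval rows=mvappend(rows, "<<MATCHSTR>>" . "|" . '<<FIELD>>') ]
| mvexpand rows
| rex field=rows "^(?<TransactionName>[^|]+)\|(?<pct2ResTime>.+)$"
| table TransactionName pct2ResTime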
Hi, I have created a new token under Settings > Access Tokens, and as I understand it, I should get a token value to copy immediately (for use elsewhere). However, after creating multiple tokens and waiting, I cannot see this token value to copy anywhere. Could I get some help with where or how to copy this token ID? Thank you!
Trying to get success and failure status counts using the query below, but it's not collapsing the duplicate URLs. Can someone help me with this? I want the result in fewer rows, but lob, URI, API_Status and their count should show.

"*/prescriptions/eni/api/api-cw/*" (URI != "*/prescriptions/eni/api/api-cw/legacySession/cache*")
| stats count by lob, URI, API_Staus

The result comes out as below.
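Two things worth checking, sketched below. First, the stats clause spells the field API_Staus while the prose says API_Status; a misspelled field name groups on an empty field. Second, if the URIs embed unique IDs, normalizing them before stats is what actually collapses duplicates; the replace() pattern here is only an assumption about what varies in your URLs:

"*/prescriptions/eni/api/api-cw/*" (URI != "*/prescriptions/eni/api/api-cw/legacySession/cache*")
| eval URI=replace(URI, "/\d+", "/{id}")
| stats count by lob, URI, API_Status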
We have a very vanilla SC4S configuration that has been working flawlessly, with a cron job that runs "service sc4s restart" every night to pick up upgrades. We just discovered that a few nights ago, it did not come back from this nightly restart. When examining the journal with this command:

journalctl -b -u sc4s

we see this:

Error response from daemon: pull access denied for splunk/scs, repository does not exist or may require 'docker login': denied: requested access to the resource is denied

This problem could happen to ANYBODY at ANY TIME, and it took us a while to fully work around it, so I am documenting the whole story here.
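For anyone hitting the same wall: SC4S container images are published on GitHub Container Registry (ghcr.io) rather than Docker Hub, so a unit file that still pulls splunk/scs will fail. A sketch of the likely fix, assuming the stock systemd unit that defines SC4S_IMAGE (the unit file path and image tag are assumptions; check your own install):

# /lib/systemd/system/sc4s.service (location varies by install)
[Service]
Environment="SC4S_IMAGE=ghcr.io/splunk/splunk-connect-for-syslog/container:latest"

# then reload and restart:
#   systemctl daemon-reload && systemctl restart sc4s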
I'm trying to create an alert that looks through a given list of indexes and triggers for each index showing zero results within a set timeframe. I'm trying the following search:

| tstats count where index IN (index1, index2, index3, index4, index5) BY index
| where count=0

But this doesn't work, because the first line on its own only returns the indexes that are not empty; there is no row at all, not even count=0, for an empty index. I also tried

| tstats count where index IN (index1, index2, index3, index4, index5) BY index
| fillnull count value=0
| where count=0

but that doesn't work either. The problem is that if "index5", for example, has no results, "| tstats count..." returns nothing for it, not even a null result, so "| fillnull" has no "index5" row to fill. I have seen other solutions use

| rest /services/data/indexes

and join or append the searches together, but since I'm on Splunk Cloud this fails with the error "Restricting results of the "rest" operator to the local instance because you do not have the "dispatch_rest_to_indexers" capability".

The only working solution I have so far is to create one alert per index with the search

| tstats count where index=<MY_INDEX>
| where count=0

but I would rather have a single alert with a list I can change when needed than multiple searches competing for a timeslot. I have considered other options, like providing a lookup table with the list of indexes and comparing against the results, but that seems too cumbersome.

Is there a way to trigger an alert for empty indexes from a single given list on Splunk Cloud?
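One pattern that works without rest and without a lookup file is to append a synthetic zero-count row for every index in the list and then sum: any index that only has its synthetic row is empty. A minimal sketch (the index list appears twice, once in tstats and once in the eval):

| tstats count where index IN (index1, index2, index3, index4, index5) BY index
| append
    [| makeresults
     | eval index=split("index1,index2,index3,index4,index5", ",")
     | mvexpand index
     | eval count=0
     | fields index count]
| stats sum(count) as count by index
| where count=0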
Using "Securing the Splunk platform with TLS" I have converted Microsoft provided certificates to pem format and verified with the "openssl verify -CAfile "CAfile.pem" "Server.pem" "  command. TLS c... See more...
Using "Securing the Splunk platform with TLS" I have converted Microsoft provided certificates to pem format and verified with the "openssl verify -CAfile "CAfile.pem" "Server.pem" "  command. TLS configuration of the web interface using web.conf is successful. TLS configuration of forwarder to indexer has failed consistently using the indexer server.conf file and the forwarder server.conf file as detailed in the doc. Our deployment is very simple; 1 indexer and a collection of windows forwarders. Has anyone been able to get TLS working between forwarder - indexer on version 9+ ? Any tips on splunkd.log entries that may point to the issue(s)?   Thanks for any help. I will be out of office next week but will return Dec 30 and check this. Thanks again.  
The Akamai dashboard is viewable for sc_admin users but not regular users. It is the only app with this issue.
I'm trying to optimize my alerts, since I'm having issues: where I work, it can take 1 to 3 days to solve the problem once an alert fires, which causes the alert to trigger repeatedly in the meantime. I can't use throttling, since my alerts do not depend on a single host or event. For example:

index=os_pci_windowsatom host IN (HostP1 HostP2 HostP3 HostP4) source=cnt_mx_pci_sql_*_status_db
| dedup 1 host state_desc
| streamstats values(state_desc) as State by host
| eval Estado=case(State!="ONLINE", "Critico", State="ONLINE", "Safe")
| table Estado host State _time
| where Estado="Critico"

When the status of a host changes to critical, the alert triggers. I cannot use throttling because, in the time span that this alert is silenced, another host may go critical and be missed completely. My idea is to build logic on the results of the last triggered alert and compare them with the current one: if the host and status are the same, nothing changes; if the host or status differs from the previous trigger, it should fire. I thought about using the data where it's stored, but I don't know how to search for this information. Does anyone have an idea? Any comment is greatly appreciated.
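Two directions worth sketching. First, if the alert trigger is set to "for each result," throttling can suppress per value of the host field, so silencing HostP1 does not silence HostP2. Second, the previous-state comparison can live in a lookup that the alert both reads and rewrites. The sketch below assumes a lookup named critical_state.csv that has been seeded beforehand (inputlookup errors on a missing file); only hosts that were not critical on the previous run survive the final where:

index=os_pci_windowsatom host IN (HostP1 HostP2 HostP3 HostP4) source=cnt_mx_pci_sql_*_status_db
| dedup 1 host state_desc
| streamstats values(state_desc) as State by host
| eval Estado=case(State!="ONLINE", "Critico", State="ONLINE", "Safe")
| where Estado="Critico"
| eval now=1
| append [| inputlookup critical_state.csv | eval prev=1]
| stats max(now) as now, max(prev) as prev by host
| where now=1
| eval is_new=if(isnull(prev), 1, 0)
| fields host is_new
| outputlookup critical_state.csv
| where is_new=1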
Hello all, I have the following case: Splunk is accessible on https://dh2.mydomain.com/sendemail931 with "enable_spotlight_search = true" in web-features.conf. If I search for anything and click a result/match, I get "The requested URL was not found on this server.", because the root_endpoint is being removed from the URL. Splunk is behind a reverse proxy (httpd) and an application load balancer. Upon clicking on the result, I'm redirected to https://dh2.mydomain.com/manager/launcher/admin/alert_actions/email?action=edit, but it should be https://dh2.mydomain.com/sendemail931/en-US/manager/launcher/admin/alert_actions/email?action=edit. I'm pretty sure the redirect happens internally, because I cannot see any relevant logs on the Apache side. I've tried adding the following to web.conf, but the result is the same:

tools.proxy.base = https://dh2.mydomain.com/sendemail931/
tools.proxy.on = true

This is the only case where root_endpoint is not preserved. I've tried to reverse-engineer why this could happen and found that the request is handled by common.min.js, I guess somewhere here:

{title:(0,r._)("Alert actions"),id:"/servicesNS/nobody/search/data/ui/manager/alert_actions",description:(0,r._)("Review and manage available alert actions"),url:"/manager/".concat(e,"/alert_actions"),keywords:[]}

+ here:

{var o=m.default.getSettingById(r);if(void 0===o)return;return(0,b.default)(o.title,o.url,O.length),n=o.url,void(window.location.href=n)
Hello to everyone! I'm not sure how to correctly name this thing, but I will carefully try to explain what I want to achieve. In our infrastructure we have plenty of Windows Server instances with the Universal Forwarder installed. All servers are divided into groups according to the particular application the servers host; for example, Splunk servers have group 'spl', remote desktop session servers have group 'rdsh', etc. Each server has an environment variable with this group value. By design, the access policy to logs was built on these groups: one group, one index. Because of this, each UF input stanza has the option "index = <group>". According to this idea, introspection logs of UF agents go to the 'spl' (Splunk) group/index. And here the nuisance starts: sometimes UF agents report errors that demand action on the running hosts, for example restarting the agent manually. I see these errors because I have access to the 'spl' index, but I don't have access to all Windows machines, so I have to notify the machine owner manually. So, the question is: how can I create a sort of tag or field on the UF that helps me separate all Splunk UF logs by these groups? Maybe I can use our environment variable to achieve it? I only need this field at search time, to create alerts that notify machine owners instead of me.
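One common pattern is to stamp an indexed field on everything a UF sends via _meta in inputs.conf. A minimal sketch; note that inputs.conf does not expand environment variables, so the group value would have to be templated per host (or per deployment server class) by your tooling:

# on each UF: etc/system/local/inputs.conf (or an app pushed per server class)
[default]
_meta = group::rdsh

# on the search head: etc/system/local/fields.conf, so "group=rdsh" works in searches
[group]
INDEXED = true

Without the fields.conf entry, the field is still searchable in the indexed-field syntax, e.g. group::rdsh.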
I am using the same index for both a stats distinct count and a timechart distinct count, but the result from timechart is always higher. Does anyone know the reason behind this and how to resolve it? I have also tried bucketing with a span and plotting the timechart, but the results still did not match the stats distinct count. Please help.
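This is usually expected: timechart computes the distinct count per time bucket, so a value that appears in several buckets is counted once in each of them, and adding the buckets up overcounts relative to a single stats dc() over the whole range. A sketch to compare, assuming a field called user:

index=<your_index>
| stats dc(user) as total_distinct_users

index=<your_index>
| timechart span=1h dc(user) as hourly_distinct_users

Summing hourly_distinct_users will exceed total_distinct_users whenever a user is active in more than one hour; the two figures only match if every value appears in exactly one bucket.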
There is a user, let's say ABC, and I want to check why his AD account is locked.
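Assuming Windows Security logs are being collected, lockouts are recorded as EventCode 4740, and the preceding failed logons (EventCode 4625) usually show where the bad credentials came from. A minimal sketch; the index name and extracted field names depend on your Windows add-on:

index=wineventlog EventCode=4740 "ABC"
| table _time host Account_Name Caller_Computer_Name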
Dear Splunk Dev team, one more simple typo issue: a fresh install of Splunk 9.4.0 (last week's version 9.3.2 also had this issue, but I thought I'd wait for the next version before posting) shows the warning message "Error in 'lookup' command: Could not construct lookup 'test_lenlookup, data'. See search.log for more details." (On older Splunk versions I remember this search.log, but nowadays neither search.log nor searches.log is available.)

https://docs.splunk.com/Documentation/Splunk/latest/Troubleshooting/WhatSplunklogsaboutitself

Per "What Splunk logs about itself," the message should read "See searches.log for more details."

One more, bigger issue: neither search.log nor searches.log is available. None of these searches return anything (the doc says the Splunk search logs are located in sub-folders under $SPLUNK_HOME/var/run/splunk/dispatch/):

index=_* source="*search.log" OR
index=_* source="*searches.log" OR
index=_* source="C:\Program Files\Splunk\var\run\splunk\dispatch*"

Will post this to Splunk Slack as well, thanks. If any post helped you in any way, please consider adding a karma point, thanks.