All Topics


Hey, I have implemented a GeneratingCommand Splunk application that fetches data from an API and yields the results chunk after chunk. I am encountering an issue where the event count in the top left looks odd: it shows `50000 of 0 events matched`, then after the next chunk is fetched `100000 of 0 events matched`, and so on. I would like to know if and how it's possible to update the `0` counter from within my application. I know the total number of scanned events from the very first reply I get from the API, but even if it can't be set to an arbitrary number, I would at least expect it to be possible to "match" the left-side count that increases on every yield... Thanks in advance, Alon
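For context, a minimal sketch of this kind of chunked generating command built on splunklib's GeneratingCommand is below; fetch_chunks() and its field names are hypothetical stand-ins for the real API paging logic, and the sketch does not itself address the matched-events counter:

#!/usr/bin/env python
# Minimal sketch of a chunked generating command (splunklib searchcommands).
# fetch_chunks() is a hypothetical placeholder for the real API paging code.
import sys
from splunklib.searchcommands import dispatch, GeneratingCommand, Configuration


def fetch_chunks():
    """Hypothetical helper: yields lists of records, one API page at a time."""
    for page in range(3):
        yield [{"time": 1700000000 + i, "raw": f"page={page} row={i}"} for i in range(5)]


@Configuration()
class FetchApiCommand(GeneratingCommand):
    def generate(self):
        for chunk in fetch_chunks():
            for record in chunk:
                # Each yielded dict becomes one event in the search results.
                yield {"_time": record["time"], "_raw": record["raw"]}


dispatch(FetchApiCommand, sys.argv, sys.stdin, sys.stdout, __name__)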
I am trying to use the ai prompt in Splunk Machine Learning Toolkit 5.6.0 in order to use the Llama Guard 4 model; note that it does not require an access token. I am testing the following prompt and I keep getting the error below. Please assist with the correct format for testing with Llama Guard 4.

Test prompt:
index=_internal log_level=error | table _time _raw | ai prompt="Please summarise these error messages and describe why I might receive them: {_raw}"

Error message:
SearchMessage orig_component=SearchOrchestrator sid=[sid] message_key= message=Error in 'ai' command: No default model was found.
Hello, I would like to know if there is a consumption gap between these two indexing modes in Splunk Cloud license usage, i.e. which one will cost the most with structured logs (JSON). What I understand:

indexed_extractions=json ==> fields are extracted at index time, which could increase the size of the tsidx files and therefore license usage and cost.
kv_mode=json ==> fields are extracted at search time and should not impact license usage.

Am I correct? Thanks for your confirmation. Regards, Nordine
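For reference, the two modes being compared are typically set in props.conf roughly as below; the sourcetype names are placeholders:

# props.conf (sketch; sourcetype names are placeholders)

# Index-time extraction: JSON fields are extracted at ingest and written into the index files
[json_indexed_example]
INDEXED_EXTRACTIONS = json

# Search-time extraction: JSON fields are parsed only when a search runs
[json_searchtime_example]
KV_MODE = json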
Hi Splunkers, I tried to enable/disable a saved search via the REST API, but I encountered problems with token authentication. I always get the following error. I have also adjusted the API information, but I still can't solve this problem.

curl -v -X POST -k -H "Authorization: Bearer dc73xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" "https://mysplunkserver:8089/servicesNS/nobody/my_app/saved/searches/testalertapi" -d enabled=0

It would be really great if you could share some working examples somewhere in your documentation. Thanks in advance!
I can't collect data for tests whose test type is set to "Dynamic" in ThousandEyes. We are currently unable to retrieve data for "Dynamic" test types via the Test Stream data input configuration in the Cisco ThousandEyes App for Splunk. In the Ingest settings of the Cisco ThousandEyes App for Splunk, under the "Tests Stream" configuration, test types set to "HTTP Server" appear correctly under Endpoint Tests and their data is successfully ingested into Splunk. However, test types set to "Dynamic" do not appear at all and cannot be ingested into Splunk. Since data configured as "HTTP Server" is being ingested successfully, we do not believe this is a communication issue between ThousandEyes and Splunk. Could you please advise how we can ingest Test Streams that are configured as "Dynamic"?
Hello, I'm getting this error on DocuSign Monitor Add-on 1.1.3:

ERROR: Could not parse the provided public key

I haven't provided any public key, so I'm wondering what this is about. Thanks for any help. Lionel
I've already released a Splunk add-on with one input and user configurations. Now I've added a new input and made UI changes on the Configuration page. I want to simulate a customer upgrade locally by:

- Installing the existing released version
- Adding sample configurations
- Upgrading it to the new version
- Testing whether existing settings are retained and the new input/UI works without issues

Could you guide me on:

- Best practices for local upgrade testing
- Ensuring configurations persist after upgrade
- Any tools or logs to verify migration behavior
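One way to simulate the upgrade on a local test instance is sketched below; the package file names and the app folder name (my_addon) are placeholders, and this mirrors a manual install rather than a UI-driven upgrade:

# Install the currently released version on a local test instance
tar -xzf my_addon-1.0.0.tgz -C $SPLUNK_HOME/etc/apps/
$SPLUNK_HOME/bin/splunk restart

# Configure sample inputs/accounts in the UI, then confirm they are stored under local/
ls $SPLUNK_HOME/etc/apps/my_addon/local/

# "Upgrade" by extracting the new package over the old one:
# default/ is replaced by the new version, local/ (user configuration) is left untouched
tar -xzf my_addon-2.0.0.tgz -C $SPLUNK_HOME/etc/apps/
$SPLUNK_HOME/bin/splunk restart

# Afterwards, check that local/ still holds the old settings and review
# index=_internal sourcetype=splunkd for errors mentioning the add-on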
Having some issues when looking at Docker HEC logs. The data shows two sources at the same time, but it does not filter on stderr or stdout when using source=stderr.

{
  line: clusterrolebinding.rbac.authorization.k8s.io/ucp-kube-system:calico-node:crds unchanged
  source: stdout
  tag: $ucp-kubectl - 9382ee9db872
}

host = omit
source = http:syslog
source = stdout
sourcetype = dnrc:docker
Greetings! I lead the development for 3 interactive React/SUIT apps, and before I go down a rabbit trail testing a few ideas, I wondered if anyone had already found something that might suit my requirement. Essentially, when promoting a new version (with some added feature(s), etc.) to Production, users have to open their browser's Dev tools, long-click on the browser's Reload button, and select "Empty Cache and Hard Reload". Understandably, they do not like having to take this step. I have some ideas around incrementing file names to avoid this, but just thought I'd check here to see if anyone else had already come up with a method. Thanks!
The current Netscaler guidance is that logs should be exported via HEC. However, it seems like the app doesn't have a sourcetype for HEC. Any guidance on that? https://docs.netscaler.com/en-us/citrix-adc/current-release/observability/auditlogs-splunk-integration.html
We recently upgraded from Enterprise Security 7.3.2 to 8.0.4. Correlation searches are not updating the risk index. I can write directly to the risk index; however, any correlation search (now a "finding") that is configured to perform risk analysis and has the risk object defined does not update the risk index.
Hi, how can I reply to the sending endpoint without using return, as I want to keep the connection open? My endpoint works as long as I respond via return through the appserver, but I do not want to close the connection. I tried using yield with json.dumps, but then the appserver throws a serialization error. Doing this async is also something the appserver does not like. How can I do something like:

respond with {'a':'1'}
... do something
return {'a':'2'}

Sample, this works:
return {'payload': '', 'status': 200}

and this does not:
yield json.dumps({'payload': '', 'status': 200})

Thanks, Peter
We are experiencing consistent log duplication and data loss when the Splunk Universal Forwarder (UF), running as a Helm deployment inside our EKS cluster, is restarted or redeployed.

Environment Details:
- Platform: AWS EKS (Kubernetes)
- UF deployment: Helm chart
- Splunk UF version: 9.1.2
- Indexers: Splunk Enterprise 9.1.1 (self-managed)
- Source logs: Kubernetes container logs (/var/log/containers, etc.)

Symptoms after the UF pod is restarted/redeployed:
- Previously ingested logs are duplicated.
- Logs generated during the restart window are missing (not all of them) in Splunk.
- The fishbucket is recreated at each restart. Confirmed by logging into the UF pod post-restart and checking /opt/splunkforwarder/var/lib/splunk/fishbucket/; timestamps indicate it is freshly recreated (ephemeral).

Our Hypothesis: we suspect this behavior is caused by the Splunk UF losing its ingestion state (fishbucket) on pod restart, due to the lack of a PersistentVolumeClaim (PVC) mounted at /opt/splunkforwarder/var/lib/splunk. This would explain both:
- Re-ingestion of previously read files (-> duplicates)
- Failure to re-ingest certain logs that may no longer be available or tracked (-> data loss)

However, we are not yet certain whether the missing logs are due to the non-persistent fishbucket or to container log rotation.

What We Need from Splunk Support:
- How can we conclusively verify whether the missing logs are caused by fishbucket loss, file rotation, inode mismatch, or other ingestion tracking issues?
- What is the recommended and supported approach for maintaining ingestion state in a Kubernetes/Helm-based Splunk UF deployment?
- Is mounting a PersistentVolumeClaim (PVC) at /opt/splunkforwarder/var/lib/splunk sufficient and reliable for preserving the fishbucket across pod restarts?
- Are there additional best practices to prevent both log loss and duplication, especially in dynamic environments like Kubernetes?
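As an illustration of the hypothesis above, persisting the ingestion state would look roughly like the sketch below; the names and size are placeholders, and the exact keys to set depend on the Helm chart in use:

# PersistentVolumeClaim for the UF ingestion state (names/sizes are placeholders)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: splunk-uf-state
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi

# In the UF pod/container spec (exact Helm values keys vary by chart):
#   volumeMounts:
#     - name: uf-state
#       mountPath: /opt/splunkforwarder/var/lib/splunk
#   volumes:
#     - name: uf-state
#       persistentVolumeClaim:
#         claimName: splunk-uf-state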
Hi Staff, we have a distributed system with one Splunk Enterprise instance and N heavy forwarders pushing data to it. We would like to back up one .conf file from each heavy forwarder every night directly into a specific folder on the Enterprise machine, using the existing ports 9997 or 8089 and avoiding any additional port configuration. Is this possible? How can we arrive at the right solution? Thanks in advance. Nick
Hi, I'm looking for a query which helps me find out whether a login is successful or not. Unfortunately, there is no direct log which would show this, so I need to use the following logic:

- If there is EventID 1000, check whether there is a following EventID 1001 with the same field called Username within a time range of 1s.
- If an EventID 1001 meeting the above conditions exists: Status=SUCCESS.
- If it doesn't exist: Status=FAILURE.

Display a table with the following fields from matched events: _time of event 1000, Computer from event 1000, Status, Resource from event 1001.

Is it possible to get this in Splunk?
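A sketch of one way to express this logic in SPL, using transaction; the index and sourcetype are placeholders:

index=your_index sourcetype=your_sourcetype EventID IN (1000, 1001)
| transaction Username startswith="EventID=1000" endswith="EventID=1001" maxspan=1s keepevicted=true
| eval Status=if(closed_txn=1, "SUCCESS", "FAILURE")
| table _time Computer Status Resource

Here keepevicted=true keeps the transactions that never saw a matching EventID 1001, so closed_txn=0 marks them as FAILURE; for those rows Resource will simply be empty, and _time is the time of the EventID 1000 event.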
Hello fellow ES 8.X enjoyers. We have a few Splunk Cloud customers that got upgraded to ES 8.1. We have noticed that all the drill-down searches from Mission Control use the time range "All time", even though we configured the earliest and latest offsets with $info_min_time$ and $info_max_time$. After saving the search again, the problem vanished. I also created a new search and it worked correctly immediately. It worked for the existing searches before the update and stopped working after the upgrade. Anybody else with the same experience? Best regards
Hello, I'm facing a problem with my DB Connect setup:

Cannot communicate with task server, please check your settings.

DBX Server is not available, please make sure it is started and listening on 9998 port or consult documentation for details.

Do you have any idea? We use Splunk Enterprise 9.2.1.
I have used a file upload field on the configuration page. I successfully uploaded the file using this field. However, when I edit the configuration, all other fields are prefilled with the previously saved values, except the file upload field. The file field does not get prefilled with the saved value. Is this the expected behavior, or is there any configuration I need to update to achieve this?
Has anyone managed to create an SELinux policy that confines the Splunk Forwarder while not limiting its functions? I'm trying to address the CIS benchmark "Ensure no unconfined services exist", as splunkd fails the test:

system_u:system_r:unconfined_service_t:s0 11315 ? 00:00:40 splunkd

In fact, two process instances are seen (not sure why):

# ps -eZ | grep "unconfined_service_t"
system_u:system_r:unconfined_service_t:s0 11379 ? 00:29:50 splunkd
system_u:system_r:unconfined_service_t:s0 11402 ? 00:02:28 splunkd

The "advice" seems to be as follows: "Determine if the functionality provided by the unconfined service is essential for your operations. If it is, you may need to create a custom SELinux policy to confine the service. Create Custom SELinux Policy: If the service needs to be confined, create a custom SELinux policy. For the splunkd service, we need to determine if it can be confined without disrupting its functionality. If splunkd requires unconfined access to function correctly, confining it might lead to degraded performance or loss of functionality."

This has proven to be very, very difficult, especially as I ultimately need to make this happen using Ansible automation. Thoughts? Solutions? Anything?
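For what it's worth, the usual starting point for confining a previously unconfined daemon is the sepolicy/audit2allow toolchain; a rough sketch is below (the module names are placeholders, the generated policy will need several permissive-mode iterations before enforcing, and the whole flow could later be wrapped in Ansible tasks):

# Generate a policy template for the daemon (sepolicy comes from policycoreutils-devel)
sepolicy generate --init -n splunkd /opt/splunkforwarder/bin/splunkd

# Build and load the generated module (sepolicy also writes a helper build script)
./splunkd.sh

# Run the new domain permissive first so denials are logged but not enforced
semanage permissive -a splunkd_t

# Exercise the forwarder, then turn logged denials into additional allow rules and iterate
ausearch -m avc -ts recent | audit2allow -M splunkd_local
semodule -i splunkd_local.pp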
Hi guys, I am new here and I want to explore some things in Splunk. I have a txt file that I uploaded, and I want to ingest the logs in this file by combining them according to a certain format, for example a log that starts with line D and ends with line F. I created a .conf file for this and restarted Splunk, but does it also affect the logs that are already indexed? Do I need to ingest these logs again, and if so, how can I delete the existing ones and ingest them again? How do you view the whole situation?
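For reference, multi-line events like the one described (starting at line D and ending at line F) are usually controlled with line-merging settings in props.conf; a rough sketch, with the sourcetype name and regexes as placeholders:

# props.conf (sketch)
[my_custom_sourcetype]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = ^D
MUST_BREAK_AFTER = ^F

Note that parsing settings like these only apply to data indexed after the change; events already in the index are not re-parsed.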