All Posts



How would I do this process in Windows Server 2019 running Splunk?
Hello all, ClamAV detected Unix.Trojan.Gitpaste-9787170-0 in the file Splunk_Research_detections.json. This file appears to be a large repository of security research information, and we'd like to verify whether this detection is a true concern or a false positive.

Threat detection file location: /opt/splunk/etc/apps/Splunk_Security_Essentials/appserver/static/vendor/splunk/Splunk_Research_detections.json
Splunk version: 9.4.0
Splunk Security Essentials version: 3.8.1
ClamAV detection: Unix.Trojan.Gitpaste-9787170-0
ClamAV version: 1.4.1/27629
ClamAV definition dates: April 24, 2025 through May 5, 2025

Security Essentials was installed on April 25, 2025, and ClamAV detections began immediately during the first scan following the install.
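One way to sanity-check the flagged file is to compare its SHA-256 hash with the same file extracted from a freshly downloaded Splunk Security Essentials 3.8.1 package; matching hashes would point toward a false positive in the ClamAV signature rather than tampering. This is only a sketch of that comparison, not an official verification procedure:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Hash the file in chunks so a large JSON file is not loaded into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Example (path taken from the post; compare against the copy
# extracted from the downloaded app package):
# sha256_of("/opt/splunk/etc/apps/Splunk_Security_Essentials/appserver/"
#           "static/vendor/splunk/Splunk_Research_detections.json")
```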
It's been close to a year now and I still have the same error when trying to use this app on the iPhone. What gives? @splunk Help!
The Akamai Guardicore Add-on for Splunk is not cloud compatible because its SDK version is 1.6.8, and Splunk Cloud requires a minimum SDK version of 2.0.1. Is it possible for the developer to upgrade to SDK 2.1.0, or to any SDK version 2.0.1 or higher? SDK GitHub
We are encountering the same issues with Azure App Service and our Java apps. Infrastructure metrics are totally fine, but spans fail to export ("The request could not be executed"). Full error message:

PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
Hi, I am using Splunk 9.4.1 and eventgen 8.1.2. My sample file used to generate events contains multiple events:

key1={val_1};key2={val2} - Event 1
key1={val_1};key2={val2} - Event 2 in next line

Now I need to generate a replacement value (any random GID) and replace val_1 in both events with the same GID; that is, the value needs to be shared across the file. But currently eventgen is not sharing the value: a new value is generated for each event within the file.
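As a workaround sketch (plain Python post-processing, not eventgen configuration; the sample text and token name are made up to match the post), one random GID can be generated once per file and substituted into every event before the events are fed onward, so all events share the same value:

```python
import uuid

# Hypothetical sample file content with the same token in two events.
sample = (
    "key1={val_1};key2=abc - Event 1\n"
    "key1={val_1};key2=def - Event 2 in next line\n"
)

# Generate one random GID for the whole file...
gid = str(uuid.uuid4())

# ...and replace every occurrence of the token with that single value,
# so both events carry an identical GID.
rendered = sample.replace("{val_1}", gid)
```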
I ran the following search in Search & Reporting: index=_internal (ERROR OR WARN) "jira". This returned 3 significant errors related to the API token. I'm unsure where I can find the file that contains the API token for Jira in Splunk. I am working in a Splunk Cloud environment. I've attached screenshots of the errors for reference. Any help would be greatly appreciated! Thank you in advance.
@livehybrid , it is in /opt/sc4s/local/context folder. Thanks
I found some fixes added in version 3.35.1 related to Citrix.  https://github.com/splunk/splunk-connect-for-syslog/releases?page=1 So I decided to update to the latest version 3.36.0. Unfortunately, the issue remains.
Hi @u_m1580

I would look at using a tstats search, as this will be more performant when there is data in the index. Something simple such as:

| tstats count where index=<yourIndex>

Then you can alert where count=0, or add a | where count>0 and alert when there are no results.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
If all these things were documented, there wouldn't be a need for Answers, Splunk Trust and .conf! 
Huh, thanks, that works. This should be changed or at least clarified in the documentation
timewrap is looking for a hidden field called _span.

This should work:

| tstats count where index=my_index by _time span=1h
| eval _span=3600
| timewrap 1w

This should give a warning:

index=my_index
| timechart span=1h count
| fields - _span
| timewrap 1w
If you use timewrap without previously using the timechart command, you get a warning: "The timewrap command is designed to work on the output of timechart." If the format is correct, it works though. For example, these two queries give the same output:

| tstats count where index=my_index by _time span=1h | timewrap 1w

index=my_index | timechart span=1h count | timewrap 1w

The first query is way faster in this case, but I get the warning mentioned above. (This is not specifically about the tstats command; it is also possible to recreate timechart's output with other commands, iirc.) The docs say: "You must use the timechart command in the search before you use the timewrap command." (both the SPL and SPL2 docs say this). Why is this the case though? Besides the docs and the warning, nothing hints towards this requirement being correct; it works... Am I missing something? If not, is it possible to deactivate the warning?
Hi, I am running standalone Splunk 8.4.1 with the Citrix add-on 8.2.3 installed. I also have SC4S running version 3.31.0. I configured Citrix to send syslog events to SC4S, and running a tcpdump on SC4S, I see those events arriving. According to the documentation, nothing else needs to be done at the SC4S level: https://splunk.github.io/splunk-connect-for-syslog/3.31.0/sources/vendor/Citrix/netscaler/ Unfortunately, I don't see any Citrix events in Splunk. I searched in the "netfw" index and also filtered by sourcetype (sourcetype="citrix*" and index=*); in both cases, no events are there. Other events, from our firewall, are reaching Splunk without any issue via the same SC4S server, so I have ruled out network issues. Any idea what could be happening? Any SC4S logs I could check? Thanks a lot.
All, just need some advice. We have a customer that we are migrating across different cloud providers. Their current Splunk cluster is running on Ubuntu 20.04 (which goes end of life on the 31st of May). We want to add new nodes in the new cloud provider running on RHEL 9.x and extend the existing Splunk cluster, so for a short while we will have a mix of Ubuntu and RHEL nodes running together in the same cluster. Splunk have said this is doable, but not something they can guarantee, as they are not responsible for the OS running on the cluster nodes. Below is the migration plan we are proposing to the customer; once we get approval we will deploy a PoC to test the migration approach:

1. Ensure there is low latency and sufficient bandwidth between the two cloud providers
2. Deploy new RHEL nodes in the new cloud provider with the same version of Splunk
3. Add the RHEL nodes to the existing Ubuntu cluster and let the cluster synchronize the data to the new RHEL nodes
4. Following successful data synchronization and testing, transfer the master cluster role to one of the RHEL nodes
5. Finally, after a period of co-existence and validation, remove the existing Ubuntu nodes (indexers and search heads) from the cluster

Any help or guidance appreciated.
Hi All. We are using Splunk for collecting logs from different devices, but logs from one device on the network were not present on the Splunk server. After some hours, the logs from that device started appearing on the Splunk server again. In the period where we missed logs from this device, there were no network changes or changes on the client; we are looking for the reason for this. The logs were missing for around 6 hours, from early in the morning. Could it be some memory issue on the server, or something with the indexes? If there was some work preparing for some kind of maintenance on the backend, could this have any effect on the Splunk server's log performance? The device we are missing logs from has been online the whole time. Any tips on how and where to troubleshoot in the Splunk environment when logs are not present from one or more hosts? Thanks in advance. DD
Hi @u_m1580 , you could create an alert using a simple search index=<your_index>  that triggers if you have no events. If you need to check a list of hosts, it's different. Ciao. Giuseppe
Hi there, I would like to create a search that alerts us when an index is not ingesting any event data, based on any field in our index.
Hi @livehybrid,

The first API worked successfully and the user list has been extracted, but fetching the ID from the user list and passing it to the next API throws an error. I need some input on the Python script: I am getting a "TypeError: string indices must be integers" error, even though the response received is in dict format.

for user in users:
    user_id = user['name']
    roles_response = requests.get(f'{controller_url}/controller/api/rbac/v1/users/{user_id}', headers=headers)
    roles = roles_response
    print(f"User: {user['name']}, Roles: {roles}")
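A likely cause, sketched with made-up data (the real controller response shape may differ): if the first call returned a dict such as {"users": [...]} rather than a bare list, iterating over it yields its string keys, and indexing a string with 'name' raises exactly this TypeError. Iterating the nested list instead avoids it; separately, roles_response is a requests Response object, so its .json() method would be needed before treating the roles payload as data.

```python
# Hypothetical payload shape standing in for the first API's JSON response.
users_response = {"users": [{"id": 7, "name": "alice"}, {"id": 9, "name": "bob"}]}

# Iterating the dict itself yields its string keys ("users"), so
# user['name'] becomes "users"['name'], which raises a TypeError.
try:
    for user in users_response:
        user["name"]
except TypeError:
    pass  # "string indices must be integers"

# Iterating the nested list yields the individual user dicts instead:
names = [user["name"] for user in users_response["users"]]
```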