All Topics

Hello all, ClamAV detected Unix.Trojan.Gitpaste-9787170-0 in the file Splunk_Research_detections.json. This file appears to be a large repository of security research information, and we'd like to verify whether this detection is a true concern or a false positive.
Threat detection file location: /opt/splunk/etc/apps/Splunk_Security_Essentials/appserver/static/vendor/splunk/Splunk_Research_detections.json
Splunk version: 9.4.0
Splunk Security Essentials version: 3.8.1
ClamAV detection: Unix.Trojan.Gitpaste-9787170-0
ClamAV version: 1.4.1/27629
ClamAV definition dates: April 24, 2025 through May 05, 2025
Security Essentials was installed on April 25, 2025, and ClamAV detections began immediately during the first scan following the install.
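Not a verdict, but one way to narrow it down: compare the flagged file's hash against the same file taken from a pristine copy of the SSE 3.8.1 package downloaded from Splunkbase. A match points to a ClamAV signature false positive on the JSON content; a mismatch points to local tampering. A minimal sketch — the package filename and its internal path are assumptions for illustration:

# Hash the flagged file on the Splunk host
sha256sum /opt/splunk/etc/apps/Splunk_Security_Essentials/appserver/static/vendor/splunk/Splunk_Research_detections.json
# Hash the same file from a freshly downloaded SSE 3.8.1 package
tar -xzf splunk-security-essentials_381.tgz
sha256sum Splunk_Security_Essentials/appserver/static/vendor/splunk/Splunk_Research_detections.json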
The Akamai Guardicore Add-on for Splunk is not cloud compatible because its SDK version is 1.6.8, and Splunk Cloud requires a minimum SDK version of 2.0.1. Is it possible for the developer to upgrade to SDK 2.1.0, or to any SDK version 2.0.1 or higher? SDK GitHub
Hi, I am using Splunk 9.4.1 and eventgen 8.1.2. My sample file for generating events contains multiple events.
Sample file:
key1={val_1};key2={val2} - Event 1
key1={val_1};key2={val2} - Event 2 on the next line
Now I need to generate a replacement value (any random GID) and replace val_1 in both events with the same GID; that is, the value needs to be shared. But currently eventgen is not sharing the value: a new value is generated for each event within the file.
If you use timewrap without a preceding timechart command, you get the warning "The timewrap command is designed to work on the output of timechart." If the format is correct, it works regardless. For example, these two queries give the same output:
| tstats count where index=my_index by _time span=1h | timewrap 1w
index=my_index | timechart span=1h count | timewrap 1w
The first query is much faster in this case, but I get the warning mentioned above. (This is not specific to the tstats command; it is also possible to recreate timechart's output with other commands, IIRC.) The docs say: "You must use the timechart command in the search before you use the timewrap command." (Both the SPL and SPL2 docs say this.) Why is this the case, though? Besides the docs and the warning, nothing hints towards this requirement being real; it works... Am I missing something? If not, is it possible to deactivate the warning?
Hi, I am running standalone Splunk 8.4.1 with the Citrix add-on 8.2.3 installed. I also have SC4S running version 3.31.0. I configured Citrix to send syslog events to SC4S, and running a tcpdump on SC4S, I see those events arriving. According to the documentation, nothing else must be done at the SC4S level: https://splunk.github.io/splunk-connect-for-syslog/3.31.0/sources/vendor/Citrix/netscaler/ Unfortunately, I don't see any Citrix events in Splunk. I searched the "netfw" index and also filtered by sourcetype (sourcetype="citrix*" and index=*); in both cases, no events are there. Other events, from our firewall, are reaching Splunk without any issue via the same SC4S server, so I have ruled out network issues. Any idea what could be happening? Any SC4S logs that I could check? Thanks a lot.
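One place worth checking: SC4S routes events it cannot classify to a fallback sourcetype instead of the vendor index, so the Citrix traffic may be landing there. A minimal sketch of the check (fallback events go to SC4S's last-chance index, main by default; yours may differ):

index=* sourcetype=sc4s:fallback earliest=-1h
| stats count by host, index

If the events show up there, the NetScaler filter isn't matching them, and comparing a raw fallback event against the format the SC4S Citrix source expects would be the next step.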
Hi all. We are using Splunk to collect logs from different devices, but logs from one device on the network are not present on the Splunk server. After some hours, the logs from that device appear on the Splunk server again. In the period where we missed logs from this device, there were no network changes or changes on the client. We are looking for the reason for this. The logs were missing for around 6 hours, from early in the morning. Could it be some memory issue on the server, or something with the indexes? If there was some work to prepare for some kind of maintenance on the backend, could this have any effect on the Splunk server's log performance? The device we are missing logs from has been online the whole time. Any tips on how and where to troubleshoot in the Splunk environment when logs are not present from one or more hosts? Thanks in advance. DD
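One way to tell whether those events were lost or merely delayed is to compare event time with index time for the affected host; a large lag during the gap window means the events arrived late (for example, a forwarder backing off and then catching up) rather than never. A minimal sketch, with <your_device> as a placeholder:

index=* host=<your_device> earliest=-24h
| eval lag_seconds=_indextime-_time
| timechart span=15m max(lag_seconds) as max_lag, count

If the lag spikes across the missing window, look at the forwarder's splunkd.log on that device for blocked-queue or connection errors during those hours.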
Hi there, I would like to create a search that alerts us when an index stops ingesting event data, based on any field in the index.
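A common pattern for this measures the time since the last indexed event rather than keying off a specific field, since tstats can answer that cheaply from index metadata. A minimal sketch, assuming an index named your_index and a one-hour threshold:

| tstats latest(_time) as last_event where index=your_index
| eval minutes_since=round((now()-last_event)/60, 0)
| where minutes_since > 60

Schedule it and set the alert to trigger when the number of results is greater than zero; widen the threshold if the index can legitimately go quiet.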
Hey everyone, I have a question about Splunk Cloud index max size. My index max size is set to 500GB, but the current size has reached 530GB, and some of the latest events (from last week) are not in the index but are going to archive storage. We have 3 months of searchable retention and 3 months of archive, and the archive dashboard is showing the latest event from last week. We have 8 indexers, which are clustered, and two dedicated search heads (not clustered). My first question is: can I update the index max size (to unlimited) in the GUI, and will it replicate to all the indexers and the 2 search heads, or should I open a support case for that? The second question is: can I restore the logs that went to archiving due to the max size issue to a searchable index again?
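For checking where each index stands against its cap, a REST-based search like the sketch below can help (the endpoint and fields are standard, though Splunk Cloud may restrict | rest depending on your role; <your_index> is a placeholder):

| rest /services/data/indexes
| search title=<your_index>
| fields splunk_server, title, currentDBSizeMB, maxTotalDataSizeMB, frozenTimePeriodInSecs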
I have a bunch of Splunk universal forwarders running version 6.6.3 on Linux machines. I'm looking to upgrade them to 8.0.x. Am I good to do a straight upgrade from 6.6.3 to 8.0.x? My Splunk Enterprise instances are on version 8.2.7. As a next step, we will also be updating Splunk Enterprise to the 9.x series. If I go ahead and update Splunk Enterprise to version 9.0, I understand a 6.6.3 UF is not compatible with 9.0 per the official docs.
Questions:
1. Can I do a straight upgrade from 6.6.3 to 8.0.x?
2. How do I get the older-version UF packages (the required tgz, rpm, and msi)?
Suggestions and guidance requested, please. #universalforwarder6.6.3 #universalforwarder8.0.x #Linux #upgradation
Newly installed universal forwarders on Windows servers are forwarding logs to Splunk Cloud, but the newly installed forwarders' names are not coming up in the forwarders list in the Cloud Monitoring Console. What could be the reason?
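The monitoring console builds its forwarder list from forwarder connection metrics, so a first check is whether the new forwarders appear in those metrics at all. A minimal sketch:

index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) as last_seen by hostname, guid, fwdType, version
| convert ctime(last_seen)

If the hosts appear here but not in the console, the console's forwarder asset table likely just needs rebuilding (it is populated by a scheduled search); if they don't appear, the data is arriving by some path other than splunktcp.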
I am trying to write a query that searches for arbitrary strings but ignores whether the string is or isn't in one specific field. I still want to see that field's values in the results, though. Example:
index = my_index AND *widget* | <ignore> my_field_42
Whether my_field_42 contains the word "widget" or not should not matter to the search, but its field values should still show in the results.
Result 1:
my_field_1 = "hello world"
my_field_2 = "AwesomeWidget69"
...
my_field_42 = "your mom"
Result 2:
my_field_1 = "hello world"
my_field_23 = "Widgets are cool"
...
my_field_42 = "Look, a widget!"
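A raw-text term like *widget* always matches anywhere in the event, so the exclusion has to happen after field extraction. One hedged approach is foreach over all fields, skipping the one to ignore — a sketch using the names from your example (multivalue fields may need extra handling):

index=my_index
| eval keep=0
| foreach * [ eval keep=if("<<FIELD>>"!="my_field_42" AND match(coalesce('<<FIELD>>',""), "(?i)widget"), 1, keep) ]
| where keep=1

This is slower than a raw-term search because every event in the range has to be examined, so constrain the time window where you can.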
Hi, I recently had an issue where my SHCluster was throwing KV store errors; the KV store status was abnormal. The resolution was checking the server.pem expiration date in /opt/splunk/etc/auth using:
openssl x509 -in /opt/splunk/etc/auth/server.pem -noout -text
After removing the server.pem and restarting, the KV store was back up. Does anyone have a way to monitor the expiration dates of all the server.pem files in the deployment? Thanks
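One low-effort option, since you are already using openssl: x509 -checkend exits non-zero when the certificate expires within the given number of seconds, which makes it easy to loop over hosts. A minimal sketch — the host names are placeholders, and 2592000 seconds is 30 days:

#!/bin/sh
for host in sh1 sh2 sh3; do
    ssh "$host" 'openssl x509 -checkend 2592000 -noout -in /opt/splunk/etc/auth/server.pem' \
        || echo "WARNING: server.pem on $host expires within 30 days"
done

Wrapping the same check in a scripted input would let you index the results and alert on them from Splunk itself.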
Trying to use time tokens in Dashboard Studio in a subsearch: $time.earliest$ and $time.latest$ work for the presets Today and Yesterday, but not when a date range is selected. Can someone kindly help?
| inputlookup daily_distinct_count.csv
| rename avg_dc_count as avg_val
| search Page="Application"
| eval _time=relative_time(now(), "-1d@d"), value=avg_val, Page="Application"
| append
    [ search index="143576" earliest=$token.earliest$ latest=$token.latest$
    | eval Page=case(match(URI, "Auth"), "Application", true(), "UNKNOWN")
    | where Page="Application"
    | stats dc(user) as value
    | eval _time=now(), Page="Application" ]
| table _time Page value
| timechart span=1d latest(value) as value by Page
I'm attempting to suppress an alert if a follow-up event (condition) is received within 60 seconds of the initial event (condition) from the same host. This is a network switch alerting on a BFD neighbor down event; I want to suppress the alert if a BFD neighbor up event is received within 60 seconds. This is the event data received:
Initial BFD down:
2025-05-07T07:20:40.482713-04:00 "switch_name" : 2025 May 7 07:20:40 EDT: %BFD-5-SESSION_STATE_DOWN: BFD session 1124073489 to neighbor "IP Address" on interface Vlan43 has gone down. Reason: Administratively Down. host = "switch_name"
Second event, which should nullify the alert:
2025-05-07T07:20:41.482771-04:00 "switch_name" : 2025 May 7 07:20:41 EDT: %BFD-5-SESSION_STATE_UP: BFD session 1124073489 to neighbor "IP Address" on interface Vlan43 is up. host = "switch_name"
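One way to express "down not followed by up within 60 seconds" is a transaction keyed on host and BFD session ID, keeping evicted (unclosed) transactions and alerting only on those. A minimal sketch — the index name and the rex extraction are assumptions based on your sample events:

index=network_syslog ("%BFD-5-SESSION_STATE_DOWN" OR "%BFD-5-SESSION_STATE_UP")
| rex "BFD session (?<session_id>\d+)"
| transaction host, session_id startswith="SESSION_STATE_DOWN" endswith="SESSION_STATE_UP" maxspan=60s keepevicted=true
| where closed_txn=0

Schedule the alert with its latest time at least 60 seconds in the past so a down event at the edge of the window has had a chance to pair with its up event.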
Hello everybody! The problem I have is that when I try to make a backup of the KV store on my search head, it fails after it is done dumping the data, or while dumping it. Splunk tells me to look into the logs, but besides some basic info that the backup has failed, I can't find anything in the splunkd and mongod logs. From my understanding, since I'm using the point_in_time option, it is important to make sure no searches are writing to the KV store when I start the backup. Since Splunk makes a snapshot of the moment I start the backup, searches that modify the KV store afterwards shouldn't impact the backup, right? I made sure no searches had the running status when starting the backup. Does anybody have tips or threads about this topic? I have thought about stopping the scheduler during the backup, but since important searches are running, I want to look into all my options before taking drastic measures. Thanks in advance for any tips and hints!
Hi Team, we are getting Dynatrace metrics and log4j logs into Splunk ITSI. Currently, we created the universal correlation search manually (which needs fine-tuning whenever required) for grouping notable events. So, does Splunk ITSI or any other Splunk product provide its own AI model to perform automatic event correlation without manual intervention? Any inputs are much appreciated. Please let me know if any additional details are required. Thank you.
I have multiple formats of JSON data coming in from Azure Key Vault. I can't seem to get the line breaking to work properly, and the Splunk Add-on for Microsoft Cloud Services doesn't provide any props for many of these JSON blobs. (I am getting multiple matching lines per ingested event.)
{ "count": 1, "total": 1, "minimum": 1, "maximum": 1, "average": 1, "resourceId": "/SUBSCRIPTIONS/blah/blah", "time": "2025-05-07T14:08:00.0000000Z", "metricName": "ServiceApiHit", "timeGrain": "PT1M"}
{ "count": 1, "total": 14, "minimum": 14, "maximum": 14, "average": 14, "resourceId": "/SUBSCRIPTIONS/blah/blah", "time": "2025-05-07T14:08:00.0000000Z", "metricName": "ServiceApiLatency", "timeGrain": "PT1M"}
And some look like this:
{ "time": "2025-05-07T14:07:58.7286344Z", "category": "AuditEvent", ....... "13"}
{ "time": "2025-05-07T14:08:02.8617508Z", "category": "AuditEvent", ....... "13"}
I've tried numerous combinations of regexes... nothing's working.
LINE_BREAKER = (\}([\r\n]\s*,[\r\n]\s*){|\{\s+\"(count|time)\")
Any suggestions would be greatly appreciated.
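Since both shapes begin with an opening brace and end with a closing one, one hedged simplification is to break on the boundary between a closing and an opening brace rather than anchoring on specific keys. A sketch of a props.conf stanza — the sourcetype name is a placeholder, and this assumes } and { never abut inside a string value:

[your:azure:keyvault:sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = \}(\s*,?\s*)\{
TIME_PREFIX = \"time\"\s*:\s*\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%7NZ
MAX_TIMESTAMP_LOOKAHEAD = 40

Note that only the capture group is discarded at the break, so the closing brace stays with the previous event and the opening brace starts the next one.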
Hello Team, is there a way to use Splunk with Cisco Contact Centers and real-time data?
Hi all, I'm struggling with an issue related to collecting Fortinet FortiOS events through SC4S. If I use the UDP protocol, I have no issues, but when I change the collection protocol to TCP, the events are not interpreted correctly because line breaking no longer works; I basically receive a buffer of merged events broken only by the _raw size limit. My setup is the following: Fortinet FW --> (syslog TCP) --> SC4S --> HEC on indexer (Splunk Cloud) --> search head (Splunk Cloud). If I receive the same events directly on a Splunk instance where the "Fortinet FortiGate Add-On for Splunk" is installed, the configuration correctly breaks the events. Here is the additional configuration needed:
[source::tcp:1514]
SHOULD_LINEMERGE = false
LINE_BREAKER = (\d{2,3}\s+<\d{2,3}>)
TIME_PREFIX = eventtime=
TIME_FORMAT = %s%9N
If I try to apply this configuration on the Splunk Cloud SH, it does not work. I believe that SC4S or the indexer is preventing this line-breaking configuration from taking effect on the SH, so I'm unable to make it work. Maybe it's possible to apply some adjustment on SC4S, if anyone has already solved this. Regards
I have installed and configured the microsoft_o365_email_add_on_for_splunk, but I am not getting logs in Splunk search. Please help me figure out how to fix it.
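A first stop is usually the add-on's own logging in _internal on the instance where the input runs; modular inputs write their errors there. A minimal sketch — the source pattern is an assumption based on the add-on's name:

index=_internal source=*o365* ("ERROR" OR "WARN")
| sort - _time

If nothing turns up, also confirm the input is enabled and that the index named in the input actually exists on the receiving side.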