All Posts


We recently updated from Enterprise Security 7.3.2 to 8.0.4. Correlation searches are not updating the risk index. I can write directly to the risk index; however, any "correlation search" (now a finding) that is configured to perform risk analysis and has the risk object defined does not update the risk index.
Hi @Schroeder
In your restmap.conf have you set stream=true? I'm not sure if this is the only thing you would need to change, but it could be a good starting point!
https://help.splunk.com/en/splunk-enterprise/administer/admin-manual/9.4/configuration-file-reference/9.4.3-configuration-file-reference/restmap.conf

stream = <boolean>
* Describes whether or not splunkd sends the payload in the request to the driver in a streaming fashion.
* A value of "true" means splunkd sends the payload in the request to the driver in a stream, or multiple sequential requests.
* A value of "false" means splunkd sends the payload in the request to the driver as a field of the original request.
* Only has effect if the 'scripttype' setting is set to "persist".
* Default: false

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
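To put that setting in context, a minimal restmap.conf stanza for a persistent custom endpoint could look like the sketch below. The stanza name, match path, script, and handler class are placeholders invented for the example, so adapt them to your app; only scripttype = persist and stream = true relate to the behaviour quoted above.

# restmap.conf (illustrative only; names are placeholders)
[script:my_streaming_endpoint]
match = /my_streaming_endpoint
script = my_streaming_endpoint.py
scripttype = persist
handler = my_streaming_endpoint.MyStreamingHandler
requireAuthentication = true
output_modes = json
# send the request payload to the persistent handler as a stream of sequential requests
stream = true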
Hi, how can I reply to the sending endpoint without using return, as I want to keep the connection open? My endpoint works as long as I respond via return through the appserver, but I do not want to close the connection. I tried using yield with json.dumps, but then the appserver throws a serialization error. Doing this async is also something the appserver does not like. How can I do something like:

respond with {'a':'1'}
... do something
return {'a':'2'}

Sample, this works:

return {'payload': '', 'status': 200}

and this does not:

yield json.dumps({'payload': '', 'status': 200})

Thanks
Peter
You are thinking about this "backwards". These fields aren't "added" to your events in the sense that they are not stored anywhere additionally, taking up space and license. They are dynamically extracted and/or calculated when you are searching your data. If you are searching in verbose or smart mode, all possible fields will be extracted (unless you explicitly limit them with the "fields" or "table" command). But if you search in fast mode (which is the preferred approach because then searches run faster), only the fields you explicitly specify will be extracted. So you don't need to "remove" the fields. If you don't want to see them when searching in smart or verbose mode you can add

| fields - whatever_fields_you_want_removed

and you won't see them. @malix_la_harpe 's answer, while technically not wrong, is not the way to go.
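As a small illustration of that command (the index, sourcetype, and field names here are placeholders, not taken from the original question), a search like this hides the CIM fields from the displayed results without affecting what is stored on disk:

index=wineventlog sourcetype="WinEventLog:Security"
| fields - user dvc dvc_nt_host src_user
| table _time Computer Target_User_Name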
If you absolutely want to remove some fields, as they are defined by the add-on, you have two choices:
- Edit the add-on by removing the corresponding lines in props.conf in the add-on folder, but you will have to patch that at each update of the add-on; that's generally not a good idea.
- Add a custom eval for each field you want to delete, by defining a new stanza in your local props.conf:

[your_sourcetype]
EVAL-your_fieldname = null()
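A minimal sketch of that second option, assuming an illustrative sourcetype and field names (check your own data for the real ones), placed in the local/props.conf of your own app:

[WinEventLog:Security]
EVAL-dvc_nt_host = null()
EVAL-src_user = null()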
Hello @PiotrAp ,
You can try something like this:

| makeresults
| eval event_id=1000, username="test", Computer="xx1", _time=strptime("2025-06-30 16:26:27.01", "%Y-%m-%d %H:%M:%S.%N"), resource="example1"
| append [| makeresults | eval event_id=1000, username="test", Computer="xx2", _time=strptime("2025-06-30 16:26:27.02", "%Y-%m-%d %H:%M:%S.%N"), resource="example2"]
| append [| makeresults | eval event_id=1001, username="test", _time=strptime("2025-06-30 16:26:27.03", "%Y-%m-%d %H:%M:%S.%N"), resource="example3"]
| append [| makeresults | eval event_id=1000, username="truc", Computer="yyy", _time=strptime("2025-06-30 16:26:29", "%Y-%m-%d %H:%M:%S"), resource="example2"]
| append [| makeresults | eval event_id=1001, username="truc", Computer="yyy", _time=strptime("2025-06-30 16:26:32", "%Y-%m-%d %H:%M:%S"), resource="example3"]
| sort _time
| streamstats time_window=1s last(event_id) AS current_event_id, last(eval(if(event_id=1000,event_id,null()))) AS previous_event_id, last(eval(if(event_id=1000,_time,null()))) AS previous_time, last(eval(if(event_id=1000,Computer,null()))) AS previous_computer, last(resource) AS current_resource by username
| eval status = if(current_event_id=1001 and previous_event_id=1000,"SUCCESS","FAILURE")

(The makeresults lines are here to generate some data to test the query.)

In the results, you can see that the "success" status has the time and the computer of the previous event 1000 in the "previous_time" and "previous_computer" fields, and the resource of the event 1001 in the current_resource field. (I handled the case where you have multiple 1000 events before the 1001; we want to keep only the fields of the last 1000 event.) The user "truc" doesn't have a success event because the 2 events aren't in a 1s time window.

If you run this query you will see results like:

Computer | _time | current_event_id | current_resource | event_id | previous_computer | previous_event_id | previous_time | resource | status | username
xx1 | 2025-06-30 16:26:27.010 | 1000 | example1 | 1000 | xx1 | 1000 | 1751293587.010000 | example1 | FAILURE | test
xx2 | 2025-06-30 16:26:27.020 | 1000 | example2 | 1000 | xx2 | 1000 | 1751293587.020000 | example2 | FAILURE | test
 | 2025-06-30 16:26:27.030 | 1001 | example3 | 1001 | xx2 | 1000 | 1751293587.020000 | example3 | SUCCESS | test
yyy | 2025-06-30 16:26:29.000 | 1000 | example2 | 1000 | yyy | 1000 | 1751293589.000000 | example2 | FAILURE | truc
yyy | 2025-06-30 16:26:32.000 | 1001 | example3 | 1001 |  |  |  | example3 | FAILURE | truc

Does that answer your question?
"Sorry for the question, but is it possible to disable fields that we don't need for a specific sourcetype or index?"
We are experiencing consistent log duplication and data loss when the Splunk Universal Forwarder (UF), running as a Helm deployment inside our EKS cluster, is restarted or redeployed.

Environment Details:
- Platform: AWS EKS (Kubernetes)
- UF Deployment: Helm chart
- Splunk UF Version: 9.1.2
- Indexers: Splunk Enterprise 9.1.1 (self-managed)
- Source Logs: Kubernetes container logs (/var/log/containers, etc.)

Symptoms:
After the UF pod restarts or is redeployed:
- Previously ingested logs are duplicated.
- Logs that were generated during the restart window are missing (not all logs) in Splunk.
- The fishbucket is recreated at each restart. Confirmed by logging into the UF pod post-restart and checking /opt/splunkforwarder/var/lib/splunk/fishbucket/; timestamps indicate it is freshly recreated (ephemeral).

Our Hypothesis:
We suspect this behavior is caused by the Splunk UF losing its ingestion state (fishbucket) on pod restart, due to the lack of a PersistentVolumeClaim (PVC) mounted to /opt/splunkforwarder/var/lib/splunk.

This would explain both:
- Re-ingestion of previously-read files (-> duplicates)
- Failure to re-ingest certain logs that may no longer be available or tracked (-> data loss)

However, we are not yet certain whether the missing logs are due to the non-persistent fishbucket, container log rotation, or both.

What We Need from Splunk Support:
- How can we conclusively verify whether the missing logs are caused by fishbucket loss, file rotation, inode mismatch, or other ingestion tracking issues?
- What is the recommended and supported approach for maintaining ingestion state in a Kubernetes/Helm-based Splunk UF deployment?
- Is mounting a PersistentVolumeClaim (PVC) to /opt/splunkforwarder/var/lib/splunk sufficient and reliable for preserving the fishbucket across pod restarts?
- Are there additional best practices to prevent both log loss and duplication, especially in dynamic environments like Kubernetes?
Hi @NickEot
Can I check - do you not deploy to your HF from a Deployment Server? Ideally an HF would be relatively stateless and be rebuildable from a DS if it were to disappear and come back without configuration. However, if you're not in a position to do this, then you would need to look at a custom app to collect the data you need and index it. I can't find it right now (but I'll keep looking), but I once created a simple app which ran btool and piped the output to a file. You could do something like this as a custom app, which could then be configured to send the conf files you are interested in to a file that is monitored in inputs.conf and ingested to whichever index you need on your indexers.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
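A rough sketch of the custom-app idea described in the reply above, with an invented app layout, script name, interval, sourcetype, and index (none of these come from an existing Splunk app):

# default/inputs.conf in a small app deployed to the HF
# bin/dump_conf.sh would run something like:
#   $SPLUNK_HOME/bin/splunk btool outputs list --debug
[script://./bin/dump_conf.sh]
interval = 86400
sourcetype = hf:conf:btool
index = hf_configs
disabled = 0

With a scripted input like this, whatever the script prints to stdout is indexed directly, so the write-to-file-and-monitor variant described above is equally valid; either way the HF simply forwards the result to the indexers over the existing 9997 output like any other input.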
Hi Staff, we have a distributed system with 1 Splunk Enterprise instance and N heavy forwarders pushing data to it. We would like to back up one .conf file from each heavy forwarder every night, directly into a specific folder on the Enterprise machine, using the same port 9997 or 8089 and avoiding any other port configuration. Is this possible? How can we get the right solution? Thanks in advance. Nick
Hi, I'm looking for a query which helps me find whether a login is successful or not. Unfortunately, there is no direct log which would show this, so I need to use the following logic:
- If there is EventID 1000, check if there is a following EventID 1001 with the same field called Username within a time range of 1s.
- If an EventID with the above conditions exists - Status=SUCCESS
- If an EventID with the above conditions doesn't exist - Status=FAILURE
Display a table with the following fields which match both events:
- _time of event 1000
- Computer from event 1000
- Status
- Resource from event 1001
Is it possible to get this in Splunk?
Hello @Kosyay ,
Those additional lowercase fields are defined by the Splunk_TA_windows add-on to be compliant with the Splunk CIM. You can find the list of the CIM field names here: https://help.splunk.com/en/splunk-cloud-platform/common-information-model/6.0/data-models/cim-fields-per-associated-data-model
The CIM helps you normalize your data to match a common standard, using the same field names and event tags for equivalent events from different sources or vendors. So I'm not sure you want to remove those fields.
If you want to see where those fields are defined, open the add-on archive, and you will see the alias and eval definitions in the file "Splunk_TA_windows\default\props.conf".
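For a feel of what those definitions look like, the props.conf entries are along these lines; the stanza name and mappings below are invented for illustration, so check the add-on's own props.conf for the real ones:

[XmlWinEventLog]
FIELDALIAS-dvc_for_windows = Computer AS dvc
EVAL-src_user = lower(Source_User_Name)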
In general, it should be.  That said, make sure you don't try to send events via HEC to an undefined index in Splunk Cloud.  You will likely lose any events after the one destined for the undefined index in a given HEC batch.
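To illustrate the batching caveat (the index names here are made up), a single HEC request body can carry several events back to back:

{"event": "login ok",   "index": "web_prod"}
{"event": "login fail", "index": "web_missing"}
{"event": "logout",     "index": "web_prod"}

If web_missing is not defined on the Cloud stack, the second event is rejected, and per the note above the "logout" event after it in the same batch is likely lost as well.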
Hello. We ended up choosing the Splunk_TA_Windows add-on, but some questions came up. During parsing, some field names are in uppercase (e.g., Target_User_Name, WorkstationName, Source_Workstation), while others are in lowercase (e.g., user, dvc_nt_host, dvc, src_user). Could someone please explain:
- Why are these additional lowercase fields needed?
- Is it possible to hide or remove them?
Thanks in advance!
I need to disagree with using any x.0.0 version. Especially as dbx 4.0.0 contains totally new features (real HA), it's more than probable that there will be more or less annoying issues with it!
The port 8089 is used only for REST API requests and responses, not for sending logs! You need a separate port for that, like 9997 in a normal situation. It doesn't matter what it is; only ensure that it's allowed in all FWs between the SH and the indexers. When you flip the port to XXXX or 9998, indexer discovery tells the SH that there is a new receiver port activated, and the SH should also use it and drop the previous 9997. If there is, e.g., a FW blocking traffic from the SH to the indexers on those new ports, then the SH can't work as expected, and I expect that later, when it loses access to its current LM, those other issues you have mentioned will start. You should find some hints in your instances' internal logs if this is really what has happened.
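For reference, a sketch of the forwarder-side indexer discovery settings in outputs.conf (the host name, group name, and key are placeholders; newer versions may use manager_uri instead of master_uri):

[indexer_discovery:cm1]
master_uri = https://cluster-manager.example.com:8089
pass4SymmKey = <your_key>

[tcpout]
defaultGroup = discovered_indexers

[tcpout:discovered_indexers]
indexerDiscovery = cm1

The actual receiving port is whatever splunktcp input the indexers advertise to the cluster manager, so that is the port the firewalls between the SH and the indexers need to allow.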
When we set up a cluster, the SH, CM and the indexers stay connected over the management port 8089 and will keep sending _internal logs no matter what, but the forwarders use the inputs port 9997 to send data to the indexers. In our case, we only flip the port to XXXX or 9998, depending on the type of forwarding setup used. We have controlled data ingestion and always stay within limits, but sometimes unexpected testing causes a high input flow, and thus, we have to take measures to make sure we don't breach the license.
OK, my license was expired. So that's probably the problem.
Hello fellow ES 8.X enjoyer. We have a few Splunk Cloud customers that got upgraded to ES 8.1. We have noticed that all the drill-down searches from Mission Control use the time range "All time", even though we configured the earliest and latest offsets with $info_min_time$ and $info_max_time$. After saving the search again, the problem vanished. I also created a new search and it worked correctly immediately. It worked for the existing searches before the update and stopped working after the upgrade. Anybody else with the same experience? Best regards
Has it ever worked and now suddenly stopped? (If so - what changes were made to your environment?) Or is it a new installation? (How exactly did you install it?) Do the logs show any related errors?