All Posts

As @livehybrid mentioned, the general idea behind UFs and their management is the other way around - the deployment server distributes config items which are pulled and applied by the forwarders. This means you should not back up your configs centrally from your forwarders but do the complete opposite - have the config centrally distributed. That way, if your forwarder fails or you need to reinstall it, you can simply push the same piece of config again. I can understand that you might want to back up the state of the forwarder (not the configs) so that if your forwarder breaks and you reinstall it, you don't have to ingest all sources from scratch. But this is a fairly borderline case and there is no built-in mechanism for it. And there is definitely no built-in way to send the config or state of the forwarder to either the indexer(s) or the deployment server. Since you can run almost anything as a scripted input, you could try to build your own duct-tape solution which gathers state files from the forwarder (I'm not sure about reading open files, though, or the consistency of their contents) and - for example - compresses them into an archive and base64-encodes it so that it can be indexed as a text event, but that's a horrible idea.
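For concreteness, here is a rough Python sketch of that duct-tape scripted-input approach (again: not recommended). The state path and the event format are assumptions for illustration only, and files that are open for writing may be archived in an inconsistent state.

#!/usr/bin/env python3
# Hypothetical scripted input: archive the forwarder's state directory,
# base64-encode it, and print it to stdout so it gets indexed as a text event.
import base64
import io
import tarfile
import time

STATE_DIR = "/opt/splunkforwarder/var/lib/splunk/fishbucket"  # assumed path

def main():
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        tar.add(STATE_DIR, arcname="fishbucket")
    encoded = base64.b64encode(buf.getvalue()).decode("ascii")
    # One event per run; the receiving side would still need props.conf work
    # to keep the event together, and a way to decode it back into files.
    print("%s state_backup=%s" % (time.strftime("%Y-%m-%dT%H:%M:%S%z"), encoded))

if __name__ == "__main__":
    main()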
@NickEot I don't think it's possible to directly back up a .conf file from a Heavy Forwarder to a specific folder on your Splunk Enterprise server using port 9997 or 8089. The best method is to use standard file transfer tools (SCP, SFTP, etc.). A workaround is to create a scripted input on the Heavy Forwarder that reads the contents of the .conf file and sends it as a log event to the Splunk Enterprise server. On the Enterprise server, you could then use a Splunk alert or script to reconstruct the file from the received event and save it.

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
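A rough Python sketch of that workaround, as a scripted input on the Heavy Forwarder. The .conf path and the event header below are made-up examples, not anything Splunk provides out of the box.

#!/usr/bin/env python3
# Hypothetical scripted input: read a .conf file and write its contents to
# stdout so the forwarder indexes it as an ordinary (multi-line) event.
import time

CONF_PATH = "/opt/splunk/etc/system/local/inputs.conf"  # assumed example path

def main():
    with open(CONF_PATH, "r") as f:
        contents = f.read()
    # Emit a header line plus the raw file; props.conf on the receiving side
    # would need LINE_BREAKER/SHOULD_LINEMERGE settings to keep it as one event.
    print("%s conf_backup source_file=%s" % (time.strftime("%Y-%m-%dT%H:%M:%S%z"), CONF_PATH))
    print(contents)

if __name__ == "__main__":
    main()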
@jessieb_83 Before adding event data to your UBA, make sure you have already integrated your HR, Asset, and Identity data. The error eventHasNoEntities occurs when your data lacks entity-related CIM-compliant fields or values. Ensure that:
- Your UBA contains Asset and Identity data before importing any event data.
- Your event data is CIM-compliant and includes the necessary entity data fields.

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
I'm closing the computation in a finally block. I tried adding up to a 10-second delay but it still fails. In Postman and curl, the stream looks complete. However, when I enable HttpClient wire logging, it looks wrong:

2025-06-30 17:28:40.599 CDT | FINE | org.apache.http.wire | http-outgoing-4 << "event: control-message[\n]"
2025-06-30 17:28:40.599 CDT | FINE | org.apache.http.wire | http-outgoing-4 << "data: {[\n]"
2025-06-30 17:28:40.599 CDT | FINE | org.apache.http.wire | http-outgoing-4 << "data: "event" : "END_OF_CHANNEL",[\n]"
2025-06-30 17:28:40.599 CDT | FINE | org.apache.http.wire | http-outgoing-4 << "data: "timestampMs" : 1751322520099[\n]"
2025-06-30 17:28:40.599 CDT | FINE | org.apache.http.wire | http-outgoing-4 << "data: }[\n]"
2025-06-30 17:28:40.599 CDT | FINE | org.apache.http.wire | http-outgoing-4 << "[\n]"

There should be a CRLF, followed by a 0 for the end of the chunked stream, followed by another CRLF. I looked at your source code. You have a lot of close() calls in there. Is it possible the underlying stream is getting closed before HttpClient gets a chance to read the end of the stream? The only close() I'm doing is on the computation.

Thanks
The recommended way of receiving Netscaler events is via Splunk's own add-on: https://docs.splunk.com/Documentation/AddOns/released/CitrixNetScaler/About You might want to try using the same sourcetype, but I have no idea what the format will be if you do it according to Netscaler's docs.
Having some issues when looking at Docker HEC logs. The data is showing two sources at the same time, but does not filter on stderr or stdout when using source=stderr.

{
  line: clusterrolebinding.rbac.authorization.k8s.io/ucp-kube-system:calico-node:crds unchanged
  source: stdout
  tag: $ucp-kubectl - 9382ee9db872
}

host = omit
source = http:syslog
source = stdout
sourcetype = dnrc:docker
Greetings! I lead the development for 3 interactive React/SUIT apps, and before I go down a rabbit trail testing a few ideas, I wondered if anyone had already found something that might suit my requirement. Essentially, when promoting a new version (with some added feature(s), etc.) to Production, users have to open their browser's Dev tools, long-click on the browser's Reload button, and select "Empty Cache and Hard Reload". Understandably, they do not like having to take this step. I have some ideas around incrementing file names to avoid this, but just thought I'd check here to see if anyone else had already come up with a method. Thanks!
The current Netscaler guidance is that logs should be exported via HEC. However, it seems like the app doesn't have a sourcetype for HEC. Any guidance on that? https://docs.netscaler.com/en-us/citrix-adc/current-release/observability/auditlogs-splunk-integration.html
Any luck with this? I'm hitting the same. As far as I can tell, it's a generic way of saying that UBA is missing some key piece of data, but I can't tell which thing it's looking for.
Hi! No, I did not. Does that mean I could then use yield? I'm having a hard time finding good examples for my use case. //Peter
Regarding the critical vulnerabilities in the AppDynamics dependencies that were flagged by GitHub, our security team has provided the following justification:

"Recently AppDynamics was made aware of alerts about malicious software in the AppDynamics Node.js Agent (https://www.npmjs.com/package/appdynamics). The following components are being flagged by NPM tooling:
- appdynamics-libagent-napi - https://github.com/advisories/GHSA-j9p6-9m64-6w55
- appdynamics-native - https://github.com/advisories/GHSA-wgg3-rjwp-5qp2
- appdynamics-protobuf - https://github.com/advisories/GHSA-5fmf-f797-r9p5

AppDynamics uses these component names internally and embeds them inside the official AppDynamics Node.js Agent. AppDynamics does not publish these components to npmjs.com. A malicious actor uploaded malicious code using the names of these packages; this is known as an npm look-a-like attack. An "npm repository look-a-like attack" refers to a malicious tactic where an attacker creates a package on the npm registry with a name that closely resembles a legitimate, popular package, aiming to trick developers into installing the malicious version instead, potentially allowing them to inject harmful code into a project by exploiting the trust placed in the seemingly familiar package name. The NPM tooling only does a name comparison and does not have the ability to distinguish where a package is sourced from. This can lead to confusion and make it appear that the AppDynamics Node.js Agent is compromised. Please understand this is a false positive. While the AppDynamics Node.js Agent does use these component names internally, the agent does not reach out to npmjs.com to retrieve the packages. Thus, when using the official AppDynamics NPM agent you will not be infected with a malicious piece of software."

If you still have any questions, please feel free to create a support case for further assistance: https://mycase.cloudapps.cisco.com/case
@DufferDave  Please have a look  https://lantern.splunk.com/Security/Product_Tips/Enterprise_Security/Installing_and_upgrading_to_Splunk_Enterprise_Security_8x  https://help.splunk.com/en/splunk-enterprise-security-8/user-guide/8.0/analytics/risk-analysis 
We recently upgraded from Enterprise Security 7.3.2 to 8.0.4. Correlation searches are not updating the risk index. I can write directly to the risk index; however, any "correlation search" (now "finding") that is configured to perform risk analysis and has the risk object defined does not update the risk index.
Hi @Schroeder

In your restmap.conf have you set stream=true? I'm not sure if this is the only thing you would need to change, but it could be a good starting point!

https://help.splunk.com/en/splunk-enterprise/administer/admin-manual/9.4/configuration-file-reference/9.4.3-configuration-file-reference/restmap.conf#:~:text=persist%22.%0A*%20Default%3A%20false-,stream%20%3D%20%3Cboolean%3E,-*%20Describes%20whether%20or

stream = <boolean>
* Describes whether or not splunkd sends the payload in the request to the driver in a streaming fashion.
* A value of "true" means splunkd sends the payload in the request to the driver in a stream, or multiple sequential requests.
* A value of "false" means splunkd sends the payload in the request to the driver as a field of the original request.
* Only has effect if the 'scripttype' setting is set to "persist".
* Default: false

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
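For reference, a minimal persistent REST handler sketch in Python, assuming the handler is registered with scripttype=persist in restmap.conf and that splunk.persistconn.application is importable from the script. It shows the conventional single return-style response; whether a generator/yield-based streaming response works together with stream=true is the open question in this thread, not something confirmed here.

import json

from splunk.persistconn.application import PersistentServerConnectionApplication


class ExampleHandler(PersistentServerConnectionApplication):
    def __init__(self, command_line, command_arg):
        PersistentServerConnectionApplication.__init__(self)

    def handle(self, in_string):
        # in_string is the JSON-encoded request (headers, query params, payload).
        request = json.loads(in_string)
        # Conventional single response: build the payload once and return it.
        return {
            "payload": json.dumps({"received_keys": sorted(request.keys())}),
            "status": 200,
        }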
Hi, how can I reply to the sending endpoint without using return, as I want to keep the connection open? My endpoint works as long as I respond via the return through the appserver, but I do not want to close the connection. I tried using yield with json.dumps but then the appserver throws a serialization error. Doing this async is also something the appserver does not like. How can I do something like:

response {'a':'1'}
... do something
return {'a':'2'}

Sample, this works:

return {'payload': '', 'status': 200}

and this does not:

yield json.dumps({'payload': '', 'status': 200})

Thanks
Peter
You are thinking about this "backwards". These fields aren't "added" to your events in the sense of being stored anywhere additionally and taking up space and license. They are dynamically extracted and/or calculated when you are searching your data. If you are searching in verbose or smart mode, all possible fields will be extracted (unless you explicitly limit them with the "fields" or "table" command). But if you search in fast mode (which is the preferred approach, because then searches run faster), only the fields you explicitly specify will be extracted. So you don't need to "remove" the fields. If you don't want to see them when searching in smart or verbose mode you can add

| fields - whatever_fields_you_want_removed

and you won't see them. @malix_la_harpe's answer, while technically not wrong, is not the way to go.
If you absolutely want to remove some fields, as they are defined by the add-on, you have two choices:
- Edit the add-on by removing the corresponding lines in props.conf in the add-on folder, but you will have to re-apply that patch at each update of the add-on; that's generally not a good idea.
- Add a custom eval for each field you want to delete, by defining a new stanza in your local props.conf:

[your_sourcetype]
EVAL-your_fieldname=null()
Hello @PiotrAp ,

You can try something like that:

| makeresults
| eval event_id=1000, username="test", Computer="xx1", _time=strptime("2025-06-30 16:26:27.01", "%Y-%m-%d %H:%M:%S.%N"), resource="example1"
| append [| makeresults | eval event_id=1000, username="test", Computer="xx2", _time=strptime("2025-06-30 16:26:27.02", "%Y-%m-%d %H:%M:%S.%N"), resource="example2"]
| append [| makeresults | eval event_id=1001, username="test", _time=strptime("2025-06-30 16:26:27.03", "%Y-%m-%d %H:%M:%S.%N"), resource="example3"]
| append [| makeresults | eval event_id=1000, username="truc", Computer="yyy", _time=strptime("2025-06-30 16:26:29", "%Y-%m-%d %H:%M:%S"), resource="example2"]
| append [| makeresults | eval event_id=1001, username="truc", Computer="yyy", _time=strptime("2025-06-30 16:26:32", "%Y-%m-%d %H:%M:%S"), resource="example3"]
| sort _time
| streamstats time_window=1s last(event_id) AS current_event_id, last(eval(if(event_id=1000,event_id,null()))) AS previous_event_id, last(eval(if(event_id=1000,_time,null()))) AS previous_time, last(eval(if(event_id=1000,Computer,null()))) as previous_computer, last(resource) AS current_resource by username
| eval status = if(current_event_id=1001 and previous_event_id=1000,"SUCCESS","FAILURE")

(The makeresults lines are here to generate some data to test the query.)

In the results, you can see that the "SUCCESS" row has the time and the computer of the previous 1000 event in the "previous_time" and "previous_computer" fields, and the resource of the 1001 event in the "current_resource" field. (I handled the case where you have multiple 1000 events before the 1001; we want to keep only the fields of the last 1000 event.) The user "truc" doesn't have a success event because the 2 events aren't in a 1s time window. If you run this query you will see results like:

Computer | _time | current_event_id | current_resource | event_id | previous_computer | previous_event_id | previous_time | resource | status | username
xx1 | 2025-06-30 16:26:27.010 | 1000 | example1 | 1000 | xx1 | 1000 | 1751293587.010000 | example1 | FAILURE | test
xx2 | 2025-06-30 16:26:27.020 | 1000 | example2 | 1000 | xx2 | 1000 | 1751293587.020000 | example2 | FAILURE | test
 | 2025-06-30 16:26:27.030 | 1001 | example3 | 1001 | xx2 | 1000 | 1751293587.020000 | example3 | SUCCESS | test
yyy | 2025-06-30 16:26:29.000 | 1000 | example2 | 1000 | yyy | 1000 | 1751293589.000000 | example2 | FAILURE | truc
yyy | 2025-06-30 16:26:32.000 | 1001 | example3 | 1001 | | | | example3 | FAILURE | truc

Does that answer your question?
"Sorry for the question, but is it possible to disable fields that we don't need for a specific sourcetype or index?"
We are experiencing consistent log duplication and data loss when the Splunk Universal Forwarder (UF) running as a Helm deployment inside our EKS cluster is restarted or redeployed.

Environment Details:
- Platform: AWS EKS (Kubernetes)
- UF Deployment: Helm chart
- Splunk UF Version: 9.1.2
- Indexers: Splunk Enterprise 9.1.1 (self-managed)
- Source Logs: Kubernetes container logs (/var/log/containers, etc.)

Symptoms:
- After the UF pod is restarted/redeployed, previously ingested logs are duplicated, and logs that were generated during the restart window are missing (not all logs) in Splunk.
- The fishbucket is recreated at each restart. Confirmed by logging into the UF pod post-restart and checking /opt/splunkforwarder/var/lib/splunk/fishbucket/ - timestamps indicate it is freshly recreated (ephemeral).

Our Hypothesis:
We suspect this behavior is caused by the Splunk UF losing its ingestion state (fishbucket) on pod restart, due to the lack of a PersistentVolumeClaim (PVC) mounted to /opt/splunkforwarder/var/lib/splunk.

This would explain both:
- Re-ingestion of previously-read files (-> duplicates)
- Failure to re-ingest certain logs that may no longer be available or tracked (-> data loss)

However, we are not yet certain whether the missing logs are due to the non-persistent fishbucket or to container log rotation.

What We Need from Splunk Support:
- How can we conclusively verify whether the missing logs are caused by fishbucket loss, file rotation, inode mismatch, or other ingestion tracking issues?
- What is the recommended and supported approach for maintaining ingestion state in a Kubernetes/Helm-based Splunk UF deployment?
- Is mounting a PersistentVolumeClaim (PVC) to /opt/splunkforwarder/var/lib/splunk sufficient and reliable for preserving the fishbucket across pod restarts?
- Are there additional best practices to prevent both log loss and duplication, especially in dynamic environments like Kubernetes?