All Posts

I’ve already released a Splunk Add-on with one input and user configurations. Now I’ve added a new input and made UI changes on the Configuration page. I want to simulate a customer upgrade locally by:

1. Installing the existing released version
2. Adding sample configurations
3. Upgrading it to the new version
4. Testing whether existing settings are retained and the new input/UI works without issues

Could you guide me on:

- Best practices for local upgrade testing
- Ensuring configurations persist after the upgrade
- Any tools or logs to verify migration behavior
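For what it's worth, a local dry run of such an upgrade on a scratch standalone instance might look roughly like this sketch (the paths, app name, and version numbers are assumptions; splunk install app ... -update 1 is the documented way to upgrade an app package in place):

# Install the previously released version on a scratch Splunk instance
$SPLUNK_HOME/bin/splunk install app /tmp/my_addon-1.0.0.tgz

# Add sample configurations via the UI or REST, then restart

# Upgrade in place with the new package; -update 1 overwrites the installed app
$SPLUNK_HOME/bin/splunk install app /tmp/my_addon-1.1.0.tgz -update 1

# User settings live outside default/, so they should survive the upgrade
ls $SPLUNK_HOME/etc/apps/my_addon/local/
cat $SPLUNK_HOME/etc/apps/my_addon/local/inputs.conf

After the upgrade, splunkd.log and your add-on's own logs under $SPLUNK_HOME/var/log/splunk are the first places to check for migration errors.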
Hi @apc

It sounds like you need to increment the "build" value in the [install] stanza of app.conf:

build = <integer>
* Required.
* Must be a positive integer.
* Increment this whenever you change files in <app_name>/static.
* Every release must change both 'version' and 'build' settings.
* Ensures browsers don't use cached copies of old static files in new versions of your app.
* 'build' is a single integer, unlike 'version' which can be a complex string, such as 1.5.18.
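For example, a release might bump both settings like this (a minimal app.conf sketch; the stanza contents are trimmed and the numbers are illustrative; 'version' conventionally lives in the [launcher] stanza):

[launcher]
version = 1.5.19

[install]
build = 19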
Hi @Schroeder

I've been trying to get this to work but unfortunately haven't managed to. It looks like this is designed for a streamed payload being sent into persistconn/appserver.py rather than for streaming a response to the browser.

Please let us know if you do manage to get anywhere with this, though. Perhaps it's possible with a custom driver = value, although there aren't many examples of this online!
Hi @MrGlass

You are seeing source twice because it is an internal field as well as being specified inside your event. This can cause problems when searching for it because it has two values.

You might find that adding a TERM statement is enough to filter this down and retain performance, rather than having to search all your data and then filter by source once all the events are loaded:

index=YourIndex sourcetype=dnrc:docker TERM(stderr)
Source is one of the default metadata fields, which are supposed to be indexed along with the event, not included in it. Therefore the initial search does not look at fields parsed out from the event itself when searching on fields like source or sourcetype.

As a workaround, instead of

<rest of your search> source=stderr

I'd try

<rest of your search> stderr | where source="stderr"
As @livehybrid mentioned, the general idea behind UFs and their management is the other way around: the deployment server distributes config items, which are pulled and applied by the forwarders. So you should not back up your configs centrally from your forwarders but do the complete opposite and have the config centrally distributed. That way, if your forwarder fails or you need to reinstall it, you can simply push the same piece of config again.

I can understand that you might want to back up the state of the forwarder (not the configs), so that if your forwarder breaks and you reinstall it you don't have to ingest all sources from scratch. But this is a fairly borderline case and there is no built-in mechanism for it. And there is definitely no built-in way to send the config or state of the forwarder to either the indexer(s) or the Deployment Server.

Since you can run almost anything as a scripted input, you could try to build your own duct-tape-bound solution which would gather state files from the forwarder (I'm not sure about reading open files, though, or the consistency of their contents) and, for example, compress such an archive and base64 it so that it can be indexed as a text event. But that's a horrible idea.
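Purely to illustrate that duct-tape approach (with all the caveats above), such a scripted input might look like this Python sketch; the state path is an assumption, and the consistency of in-use state files is not guaranteed:

#!/usr/bin/env python3
# Sketch: emit the UF's state directory as a single base64-encoded text event.
import base64
import io
import tarfile
import time

STATE_DIR = "/opt/splunkforwarder/var/lib/splunk"  # fishbucket etc. (assumed path)

def main():
    buf = io.BytesIO()
    # Compress the state directory into an in-memory tar.gz archive
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        tar.add(STATE_DIR, arcname="state")
    # Base64 the archive so it survives being indexed as a text event
    payload = base64.b64encode(buf.getvalue()).decode("ascii")
    print(f"{time.strftime('%Y-%m-%dT%H:%M:%S%z')} uf_state_backup payload={payload}")

if __name__ == "__main__":
    main()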
@NickEot

I don't think it's possible to directly back up a .conf file from a Heavy Forwarder to a specific folder on your Splunk Enterprise server using port 9997 or 8089.

The best method is to use standard file transfer methods (SCP, SFTP, ...). A workaround is to create a scripted input on the Heavy Forwarder that reads the contents of the .conf file and sends it as a log event to the Splunk Enterprise server. On the Enterprise server, you could then use a Splunk alert or script to reconstruct the file from the received event and save it.

Regards, Prewin
@jessieb_83

Before adding event data to your UBA, make sure you have already integrated your HR, Asset, and Identity data. The error eventHasNoEntities occurs when your data lacks entity-related CIM-compliant fields or values. Ensure that:

- Your UBA contains Asset and Identity data before importing any event data.
- Your event data is CIM-compliant and includes the necessary entity data fields.

Regards, Prewin
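As a quick sanity check (a sketch only; the index, sourcetype, and exact entity fields are placeholders for your environment), you can count how many of your events actually carry CIM entity fields such as src, dest, and user:

index=<your_event_index> sourcetype=<your_sourcetype>
| stats count AS events, count(src) AS has_src, count(dest) AS has_dest, count(user) AS has_user

If has_src, has_dest, and has_user are all far below events, the eventHasNoEntities error is what you would expect.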
I'm closing the computation in a finally block. I tried adding up to a 10-second delay, but it still fails. In Postman and curl, the stream looks complete.

However, when I enable HttpClient wire logging, it looks wrong:

2025-06-30 17:28:40.599 CDT | FINE | org.apache.http.wire | http-outgoing-4 << "event: control-message[\n]"
2025-06-30 17:28:40.599 CDT | FINE | org.apache.http.wire | http-outgoing-4 << "data: {[\n]"
2025-06-30 17:28:40.599 CDT | FINE | org.apache.http.wire | http-outgoing-4 << "data: "event" : "END_OF_CHANNEL",[\n]"
2025-06-30 17:28:40.599 CDT | FINE | org.apache.http.wire | http-outgoing-4 << "data: "timestampMs" : 1751322520099[\n]"
2025-06-30 17:28:40.599 CDT | FINE | org.apache.http.wire | http-outgoing-4 << "data: }[\n]"
2025-06-30 17:28:40.599 CDT | FINE | org.apache.http.wire | http-outgoing-4 << "[\n]"

There should be a CRLF, followed by a 0 for the end of the chunked stream, followed by another CRLF.

I looked at your source code. You have a lot of close() calls in there. Is it possible the underlying stream is getting closed before HttpClient gets a chance to read the end of the stream? The only close() I'm doing is on the computation.

thanks
The recommended way of receiving Netscaler events is via Splunk's own add-on: https://docs.splunk.com/Documentation/AddOns/released/CitrixNetScaler/About

You might want to try to use the same sourcetype, but I have no idea what the format will be if you do it according to Netscaler's docs instead.
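If you do experiment with that, a HEC token can pin the add-on's syslog sourcetype in inputs.conf. This is an untested sketch (the token name and index are made up, and whether the add-on's extractions match HEC-delivered events is exactly the open question):

[http://netscaler_hec]
token = <your-token-guid>
sourcetype = citrix:netscaler:syslog
index = netscaler
disabled = 0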
Having some issues when looking at docker HEC logs. The data is showing two sources at the same time, but does not filter on stderr or stdout when using source=stderr.

{
   line: clusterrolebinding.rbac.authorization.k8s.io/ucp-kube-system:calico-node:crds unchanged
   source: stdout
   tag: $ucp-kubectl - 9382ee9db872
}

host = omit
source = http:syslog
source = stdout
sourcetype = dnrc:docker
Greetings! I lead the development of 3 interactive React/SUIT apps, and before I go down a rabbit trail testing a few ideas, I wondered if anyone had already found something that might suit my requirement. Essentially, when promoting a new version (with some added features, etc.) to Production, users have to open their browser's Dev tools, long-click the browser's Reload button, and select "Empty Cache and Hard Reload". Understandably, they do not like having to take this step. I have some ideas around incrementing file names to avoid this, but thought I'd check here to see if anyone else had already come up with a method. Thanks!
The current Netscaler guidance is that logs should be exported via HEC. However, it seems the app doesn't have a sourcetype for HEC. Any guidance on that? https://docs.netscaler.com/en-us/citrix-adc/current-release/observability/auditlogs-splunk-integration.html
Any luck with this? I'm hitting the same. As far as I can tell, it's a generic way to say that UBA is missing some key piece of data, but I can't tell which thing it's looking for.
Hi! No, I did not. Does that mean I could then use yield? I'm having a hard time finding good examples for my use case. //Peter
Regarding the critical vulnerabilities in the AppDynamics dependencies that were flagged by GitHub, our security team has provided the following justification:

"Recently AppDynamics was made aware of alerts about malicious software in the AppDynamics Node.js Agent (https://www.npmjs.com/package/appdynamics). The following components are being flagged by NPM tooling:

appdynamics-libagent-napi - https://github.com/advisories/GHSA-j9p6-9m64-6w55
appdynamics-native - https://github.com/advisories/GHSA-wgg3-rjwp-5qp2
appdynamics-protobuf - https://github.com/advisories/GHSA-5fmf-f797-r9p5

AppDynamics uses these component names internally and embeds them inside the official AppDynamics Node.js Agent. AppDynamics does not publish these components to npmjs.com. A malicious actor uploaded malicious code using the names of these packages; this is known as an NPM look-a-like attack. An "npm repository look-a-like attack" refers to a malicious tactic where an attacker creates a package on the npm registry with a name that closely resembles a legitimate, popular package, aiming to trick developers into installing the malicious version instead, potentially allowing them to inject harmful code into a project by exploiting the trust placed in the seemingly familiar package name.

The NPM tool does a name comparison only and does not have the ability to distinguish where a package is sourced from. This can lead to confusion and make it appear that the AppDynamics Node.js Agent is compromised. Please understand this is a false positive. While the AppDynamics Node.js Agent does use these component names internally, the agent does not reach out to npmjs.com to retrieve the packages. Thus, when using the official AppDynamics NPM agent you will not be infected with a malicious piece of software."

If you still have any questions, please feel free to create a support case for further assistance: https://mycase.cloudapps.cisco.com/case
@DufferDave

Please have a look:

https://lantern.splunk.com/Security/Product_Tips/Enterprise_Security/Installing_and_upgrading_to_Splunk_Enterprise_Security_8x
https://help.splunk.com/en/splunk-enterprise-security-8/user-guide/8.0/analytics/risk-analysis
We recently updated from Enterprise Security 7.3.2 to 8.0.4. Correlation searches are not updating the risk index. I can write directly to the risk index; however, any "correlation search" (now finding) that is configured to perform risk analysis and has the risk object defined does not update the risk index.
Hi @Schroeder

In your restmap.conf, have you set stream=true? I'm not sure if this is the only thing you would need to change, but it could be a good starting point!

https://help.splunk.com/en/splunk-enterprise/administer/admin-manual/9.4/configuration-file-reference/9.4.3-configuration-file-reference/restmap.conf#:~:text=persist%22.%0A*%20Default%3A%20false-,stream%20%3D%20%3Cboolean%3E,-*%20Describes%20whether%20or

stream = <boolean>
* Describes whether or not splunkd sends the payload in the request to the driver in a streaming fashion.
* A value of "true" means splunkd sends the payload in the request to the driver in a stream, or multiple sequential requests.
* A value of "false" means splunkd sends the payload in the request to the driver as a field of the original request.
* Only has effect if the 'scripttype' setting is set to "persist".
* Default: false
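For context, 'stream' only applies to persistent handlers; a minimal restmap.conf stanza for such an endpoint might look like this sketch (the stanza name, script, and handler class are made-up examples):

[script:my_streaming_endpoint]
match = /my_streaming_endpoint
script = my_handler.py
scripttype = persist
handler = my_handler.MyHandler
requireAuthentication = true
stream = true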
Hi,

how can I reply to the sending endpoint without using return, as I want to keep the connection open?

My endpoint works as long as I respond via the return through the appserver, but I do not want to close the connection. I tried using yield with json.dumps, but then the appserver throws a serialization error. Doing this async is also something the appserver does not like.

How can I do something like:

respond {'a':'1'}
... do something
return {'a':'2'}

Sample, this works:

return {'payload': '', 'status': 200}

and this does not:

yield json.dumps({'payload': '', 'status': 200})

Thanks
Peter
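For reference, the return-based flow described above (the one that works, but ends the request) looks roughly like this in a persistent REST handler. The class and module names are made up, and splunk.persistconn.application is only importable inside Splunk's bundled Python:

import json

from splunk.persistconn.application import PersistentServerConnectionApplication

class MyHandler(PersistentServerConnectionApplication):
    def __init__(self, command_line, command_arg):
        PersistentServerConnectionApplication.__init__(self)

    def handle(self, in_string):
        # in_string carries the request (method, headers, query, payload) as JSON
        request = json.loads(in_string)
        # The framework expects a dict (not a JSON string) to be returned, and
        # returning it ends the request/response cycle; a generator produced by
        # yield is likely what trips the appserver's serializer
        return {"payload": json.dumps({"a": "1"}), "status": 200}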