All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I'm new to Splunk and I've been working on some labs for practice. I'm working through this lab set from this repo (https://github.com/0xrajneesh/Splunk-Projects-For-Beginners/blob/main/project%237-analyzing-dhcp-logs-using-splunk-siem.md), and for some reason, whenever I try to upload the log files through "Add Data", the upload keeps timing out. I was initially able to upload the DNS logs, but now I can't upload anything. Is it because I'm on a free trial? Can someone else try it and let me know if you're having the same problem?
Hi everyone, I'm building a small test lab that intentionally includes a Windows 7 SP1 (x64) endpoint, so I really need a forwarder that works with Windows 7. The current Previous Releases Universal Forwarder page lists 9.x as compatible with Windows 10 and 11. I know Windows 7 is ancient at this point, but is there an official archive, mirror, or authenticated link where I can still pull that legacy MSI? Any pointer or working link would be greatly appreciated. Thanks!
I have Splunk web access logs with sourcetype=access_combined. What would the Splunk query look like to get an hourly trellis of pie charts by http_status?
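To make the goal concrete, this is the shape I'm imagining (a sketch only; I'm assuming http_status is already extracted for this sourcetype):

sourcetype=access_combined
| bin _time span=1h
| eval hour=strftime(_time, "%Y-%m-%d %H:00")
| chart count over http_status by hour

with the visualization set to Pie and the Trellis layout enabled, so each hourly column becomes its own pie. Is that the right approach, or is there a cleaner way?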
This product was released back in 2023: https://community.splunk.com/t5/Product-News-Announcements/Observability-Cloud-Splunk-Distribution-of-the-OpenTelemetry/ba-p/672091 I'm using it successfully; however, it seems like it is not being maintained. No new versions of the add-on have been released to keep up with changes in the Helm chart. I was able to successfully update from the default image on this version (0.86.0) to the latest (0.127.0); however, the EKS add-on creates the config map that is mounted to the agents with some deprecated values that are no longer valid for the latest version of the image. Is there any intent to maintain this EKS add-on, or is the recommendation to migrate to the Helm chart (https://github.com/signalfx/splunk-otel-collector-chart)?
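If migration is the recommendation, I assume the path is the chart's standard install flow, something like this (a sketch based on my reading of the chart repo's README; the realm, token, and cluster name are placeholders):

helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart
helm repo update
helm install splunk-otel-collector \
  --set="splunkObservability.realm=us0,splunkObservability.accessToken=xxxxxx,clusterName=my-eks-cluster" \
  splunk-otel-collector-chart/splunk-otel-collector

If anyone has migrated from the EKS add-on to the chart this way, I'd like to know about any gotchas with the existing config map values.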
My SignalFlow queries consistently end with "org.apache.http.MalformedChunkCodingException: CRLF expected at end of chunk." My code is similar to the example here: https://github.com/signalfx/signalflow-client-java

I create the transport and client, then go into a loop and execute the same query once per iteration, with an updated start time each time. I read all the messages in the iterator, though I ignore some types. I close the computation at the end of each iteration. The query seems to work fine, and I get the data I expect. The stack trace looks like this:

Jun 27, 2025 4:33:16 PM com.signalfx.signalflow.client.ServerSentEventsTransport$TransportEventStreamParser close
SEVERE: failed to close event stream
org.apache.http.MalformedChunkCodingException: CRLF expected at end of chunk
    at org.apache.http.impl.io.ChunkedInputStream.getChunkSize(ChunkedInputStream.java:250)
    at org.apache.http.impl.io.ChunkedInputStream.nextChunk(ChunkedInputStream.java:222)
    at org.apache.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:183)
    at org.apache.http.impl.io.ChunkedInputStream.read(ChunkedInputStream.java:210)
    at org.apache.http.impl.io.ChunkedInputStream.close(ChunkedInputStream.java:312)
    at org.apache.http.impl.execchain.ResponseEntityProxy.streamClosed(ResponseEntityProxy.java:142)
    at org.apache.http.conn.EofSensorInputStream.checkClose(EofSensorInputStream.java:228)
    at org.apache.http.conn.EofSensorInputStream.close(EofSensorInputStream.java:172)
    at java.base/sun.nio.cs.StreamDecoder.implClose(StreamDecoder.java:377)
    at java.base/sun.nio.cs.StreamDecoder.close(StreamDecoder.java:205)
    at java.base/java.io.InputStreamReader.close(InputStreamReader.java:192)
    at java.base/java.io.BufferedReader.close(BufferedReader.java:525)
    at com.signalfx.signalflow.client.ServerSentEventsTransport$TransportEventStreamParser.close(ServerSentEventsTransport.java:476)
    at com.signalfx.signalflow.client.ServerSentEventsTransport$TransportChannel.close(ServerSentEventsTransport.java:396)
    at com.signalfx.signalflow.client.Computation.close(Computation.java:168)

my code here

Should I be doing something different? Thanks
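For reference, the loop is essentially this shape (a trimmed sketch following the pattern in the linked README; the class names and the execute() signature are my assumptions and may differ between client versions):

import com.signalfx.signalflow.client.ChannelMessage;
import com.signalfx.signalflow.client.Computation;
import com.signalfx.signalflow.client.SignalFlowClient;

public class SignalFlowLoop {
    public static void main(String[] args) {
        // Sketch only: constructor and execute() assumed from the README example
        SignalFlowClient flow = new SignalFlowClient(System.getenv("SFX_TOKEN"));
        String program = "data('cpu.utilization').mean().publish()";

        for (int i = 0; i < 10; i++) {
            // each iteration runs the same program (updated start-time handling elided)
            Computation computation = flow.execute(program);
            for (ChannelMessage message : computation) {
                if (message.getType() == ChannelMessage.Type.DATA_MESSAGE) {
                    // consume data messages; other message types are ignored
                }
            }
            // closing the computation is where the CRLF exception gets logged
            computation.close();
        }
    }
}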
Are these fields mutually exclusive? I'm not sure about the relation between these four fields.
I have a Developer License, but I'm unable to download Enterprise Security (ES). Can anyone help me with this?
I have a unique problem regarding SNMP and Splunk ITSI. Originally, my VNF node was forwarding SNMP traps to an SNMP target via SNMPv3. That target supports SNMP auto-discovery, so I didn't have to manually configure the engine ID. Later I got the option of integrating my node with Splunk ITSI and SC4SNMP, which I did; initially they didn't support engine ID auto-discovery, so I manually ran an SNMPGET and provided the engine ID. I then started sending my traps to both targets with the same OID and engine ID. But my alarms are not reaching the Splunk index, even though we can see them arriving on the SC4SNMP port. Later I found out that Splunk ITSI is receiving the same alarm with the same OID forwarded from the previous target, but that target is using SNMPv2 and sending it as a community trap, with a community string and a few OIDs bundled together. Could this be the reason my node's original trap is not reaching the correct index?
Hi, I need to upgrade Splunk v8.2.2.1 on RHEL 7.6 to Splunk v9.4 on RHEL 9.6. I saw that Splunk 8.2 does not support RHEL 9.6, and the customer cannot upgrade to RHEL 8.x. The only version of Splunk compatible with both versions of RHEL is Splunk 9.0, but it is impossible to download it directly from the Splunk site. How can I download this older version? Thank you, Mauro
Hello, we are planning to upgrade Splunk Enterprise from version 9.2.1 to the latest version, 9.4.2. Can a 9.4.2 search head talk to a 9.2.1 indexer, or do we need to upgrade the indexers to the same version as well? Also, will Splunk UF 8.0.5 be able to talk to the indexers? I read that it will work, but that we will not have full Splunk support for these versions, only P3 support if there are any issues. Thanks
My Linux logs cannot be parsed in my dashboard. My renderXml setting is set to false.
Below is my YAML configuration; I'm trying to configure a Windows host to collect data.

receivers:
  hostmetrics:
    collection_interval: 30s
    scrapers:
      cpu:
      memory:
      disk:
      filesystem:
      network:
      paging:
      processes:

exporters:
  splunk_hec:
    token: ""
    endpoint: "https://testsplunk.com:8088"
    source: "otelcol"
    sourcetype: "_json"
    index: "telemetry_test"

service:
  pipelines:
    metrics:
      receivers: [hostmetrics]
      exporters: [splunk_hec]
Hi Team, we have installed the latest npm appdynamics package (24.12.0), and it adds the dependent packages below, which have critical vulnerabilities in package-lock.json:

"appdynamics-libagent-napi"
"appdynamics-native"
"appdynamics-protobuf"

Please let us know the resolution for this issue, as our application cannot use a lower version of AppDynamics. Thanks
I just built an application that contains a dashboard, and I don't want an export button or a duplicate button at the top of the dashboard. I tried removing the export_results_is_visible capability, but the export button is still visible and usable on the application dashboard. Are there any other ways to disable them?
Hi, depending on specific field values, I would like to perform different actions per event in one search string with the map command. A simple example:

1. If there is an event that includes field=value_1, I would like to remove rows from a lookup that have field=value_1.
2. If there is an event that includes field=value_2, I would like to add a row to another lookup.

Here is how I create my sample data:

| makeresults format=csv data="field
value_1
value_2"
| eval spl=case(field="value_1", "| inputlookup test.csv | search NOT field=\"" + field + "\" | outputlookup test_2.csv",
                field="value_2", "| makeresults | eval field=\"" + field + "\" | outputlookup test_2.csv")

The easiest way I thought of was adding:

| map search="$spl$"

But Splunk seems to put quotes around the value. Avoiding that with the approach described here (https://community.splunk.com/t5/Installation/How-do-you-interpret-string-variable-as-SPL-in-Map-function/m-p/385353) does not work, because I cannot use the search command this way. Do you have any ideas how to achieve my goal?
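In case it changes the answer: the two actions don't strictly have to go through map. This is the fallback I have in mind, as two separate searches (a sketch; index=main stands in for my real event search):

Remove rows from the lookup when a matching event exists:

| inputlookup test.csv
| search NOT [ search index=main field=value_1 | dedup field | fields field ]
| outputlookup test.csv

Append a row for each value_2 event:

index=main field=value_2
| dedup field
| fields field
| outputlookup append=true test_2.csv

I'd still prefer a single search if map can be made to work.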
I've been creating some new modern playbooks in SOAR for automation. One of the playbooks I created has a drop-down next to it that shows an "Outputs" menu with Name, Data Type, and Description fields that are all blank. Only one playbook has this option, and all of them were created from scratch. What caused this Outputs drop-down on the one playbook? The playbook type was created as automation, not input.
I came across a monitoring stanza for F5 in our repo: [UDP://9514]. I wonder if there is any reason not to use syslog in this case. Are there any limitations to using syslog vs. a direct UDP connection? Why would anybody bypass syslog?
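For context, the two setups I'm comparing look roughly like this in inputs.conf (a sketch; the sourcetype and index names are placeholders):

Direct UDP input on the Splunk instance:

[udp://9514]
sourcetype = f5:bigip:syslog
index = netops
connection_host = ip

Versus a syslog daemon (rsyslog/syslog-ng) listening on 9514 and writing to files that Splunk monitors:

[monitor:///var/log/f5/*.log]
sourcetype = f5:bigip:syslog
index = netops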
Hello, I have a request from a systems manager related to SOX controls. They are requesting information about the local Splunk account that is created when a UF is installed (this is on a Linux machine). They are asking where the password for this account is stored, who has access to it, and what controls exist around it. They are also requesting that we make this account non-interactive; would this cause any problems? They would then have to go around to all 200+ UFs and do this, and I'm not sure how practical that would be. Has anyone encountered requests related to local Splunk UF accounts and SOX controls?
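For the non-interactive part, I assume what they mean is giving the service account a nologin shell, along these lines (assuming the account is named splunkfwd, the package default; adjust to your installs):

# check the current shell for the forwarder account
getent passwd splunkfwd

# make the account non-interactive
sudo usermod -s /usr/sbin/nologin splunkfwd

My understanding is that the UF runs as a service and doesn't need an interactive shell, but I'd like confirmation before rolling this out to 200+ hosts.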
We would like to produce statistics about the usage of Splunk, categorizing searches by the time range they cover: the last day, the past week, or the past month. Which fields in _audit provide the beginning and end of a search's time range?
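This is the kind of query I have in mind (a sketch; I believe completed audit events carry search_et and search_lt for the search window, but that is exactly what I'd like confirmed, and all-time searches may not have numeric values there):

index=_audit action=search info=completed search_et=* search_lt=*
| eval range_sec = search_lt - search_et
| eval range_bucket = case(range_sec <= 86400, "last day",
                           range_sec <= 604800, "past week",
                           range_sec <= 2678400, "past month",
                           true(), "longer")
| stats count by range_bucket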
I think I know how to do this, but I thought it would be best to check with some of the experts here first. I am upgrading the hardware (storage expansion) on our indexers, and this will require turning off and unplugging each device. The indexers are clustered with a replication factor of 2. From what I have read, for each indexer I can:

1. Issue the 'splunk offline' command on the indexer I am working on.
2. Wait for the indexer to wrap up any tasks.
3. Shut down and unplug the machine to perform the upgrade.
4. Once complete, plug it back in and turn it back on (and make sure Splunk starts running again).

Am I missing anything important? The command sequence I'm planning is sketched below. Thanks!
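For completeness, here is that sequence (a sketch; enabling maintenance mode on the cluster manager is my assumption for suppressing bucket fix-up activity while the peer is down, and I'd welcome corrections):

# on the cluster manager, before taking the peer down
splunk enable maintenance-mode

# on the indexer being upgraded
splunk offline

# ...power down, expand storage, power up, confirm splunkd is running...

# on the cluster manager, once the peer has rejoined
splunk disable maintenance-mode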