All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I have a question about modifying the KV store configuration in a search head cluster environment. I created a KV store with the Lookup Editor app from one search head instance. Now I would like to add a new column, so I have to modify collections.conf, right? However, the configuration is on the individual search head instances, not managed through the SHC. What is the best way to add a new column to the KV store? Thank you
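For reference, a minimal sketch of what the collections.conf and transforms.conf changes could look like; the collection and field names are hypothetical, and on an SHC such app changes are normally pushed from the deployer rather than edited on individual members:

# collections.conf
[my_collection]
# hypothetical new column; valid types are number|bool|string|time
field.new_column = string

# transforms.conf -- expose the new field through the lookup definition
[my_collection_lookup]
external_type = kvstore
collection = my_collection
fields_list = _key, existing_column, new_column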
I use the metadata command to monitor the activity status of the member nodes in my cluster, but recently I discovered an anomaly. My SHC member 01 showed as inactive, and the last time it sent metadata was a long time ago. However, when I checked my SHC cluster member status in the back end, it was always in the Up state, and its last heartbeat was also recent. I restarted member 01, but the latest time for member 01 still cannot be seen in the metadata.
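For context, a minimal sketch of the kind of metadata-based liveness check described, assuming the members report into _internal; the index and the 15-minute threshold are placeholders:

| metadata type=hosts index=_internal
| eval lastSeen=strftime(recentTime, "%F %T")
| where recentTime < relative_time(now(), "-15m")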
Does a Heavy Forwarder support output via HTTPOUT? I've seen conflicting posts saying it is and is not supported. I've configured it, and it never attempts to send any traffic.
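For comparison, a minimal outputs.conf sketch of the httpout stanza as documented; the token and receiver URI below are placeholders:

[httpout]
httpEventCollectorToken = <your-HEC-token>
uri = https://receiver.example.com:8088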
Hi, I'm attempting to write a search that returns a top 10 of a value. However, I'm noticing that I get a different top 10 when I rerun the search. Does this happen to anyone else?
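One common cause is ties: when several values share the same count, their relative order is not guaranteed between runs. A minimal sketch that forces a deterministic tie-break by adding a secondary sort key; the index, sourcetype, and field names are placeholders:

index=web sourcetype=access_combined
| stats count by uri
| sort 0 -count, uri
| head 10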
Please extract the User-Agent field from the JSON event below.

httpMessage: { [-]
    bytes: 2
    host: rbwm-api.sony.co.uk
    method: GET
    path: /kong/originations-loans-uk-orchestration-prod-proxy/v24/status
    port: 443
    protocol: HTTP/1.1
    requestHeaders: Content-Type: application/json X-SONY-Locale: en_GB X-SONY-Chnl-CountryCode: GB X-SONY-Chnl-Group-Member: HRFB X-SONY-Channel-Id: WEB Cookie: dspSession=hzxVP-NKKzZIN0wfzk85UD0ji7I.*AAJTSQACMDIAAlNLABxvOTRoWElJS2FEU0wrNlMxdTByMGtGN2JYM289AAR0eXBlAANDVFMAAlMxAAI0NQ..* Accept: */* User-Agent: node-fetch/1.0 ( https://github.com/bitn/node-fetch) Accept-Encoding: gzip,deflate Host: rbwm-api.sony.co.uk Connection: close remove-dup-edge-ctrl-headers-rollout-enabled: 1

The httpMessage.requestHeaders field values are being extracted, but I only want the User-Agent field and its value extracted from all those values. Please help me with this.
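A minimal rex sketch that pulls only the User-Agent value out of the flattened header string, assuming Accept-Encoding always follows it in the header order (adjust the boundary token if it differs):

... | rex field=httpMessage.requestHeaders "User-Agent:\s+(?<http_user_agent>.+?)\s+Accept-Encoding:"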
I am using Enterprise 9.3.2, ES 8.1.0, and SOAR 6.4.1 to test the pairing function. Both devices are on-premises and in the same subnet, with no network issues between them. However, when I try to use the pairing function in ES, the following error message appears: "Cannot connect to SOAR. Check that the ES IP address is included on the SOAR stack allow list." When I check the internal log, it shows the following error: "Unexpected error when attempting pairing: HTTPSConnectionPool(host='xxx.xxx.xxx.xxx', port=8443): Max retries exceeded with URL: /rest/version (Caused by SSLError(SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:1143)')))". Does anyone have any ideas on how to resolve this?
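In case it helps with triage, one way to see what the SOAR side offers during the TLS handshake is a plain OpenSSL probe from the ES host; the address below is the masked one from the error message:

openssl s_client -connect xxx.xxx.xxx.xxx:8443 </dev/null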
Hi all,

I am trying to develop a custom command. The custom command works as expected, and now I am working on setting up proper logging, but I can't seem to make the Python script log anything, or I'm looking in the wrong place. I built it following what's written here: Create a custom search command | Documentation | Splunk Developer Program

Here's a quick Python code example:

#!/usr/bin/env python
# coding=utf-8
#
# Copyright © 2011-2015 Splunk, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"): you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os, sys, requests, json

sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "lib"))
from splunklib.searchcommands import dispatch, StreamingCommand, Configuration, Option, validators, splunklib_logger as logger


@Configuration()
class TestCustomCMD(StreamingCommand):

    def getFieldValue(self, field_name, record):
        return record[field_name] if field_name in record else ""

    def writeFieldValue(self, field_name, field_value, record):
        record[field_name] = field_value

    def stream(self, records):
        for record in records:
            self.writeFieldValue("TEST FIELD", "TEST CUSTOM COMMAND", record)
            logger.fatal("FATAL logging example")
            logger.error("ERROR logging example")
            logger.warning("WARNING logging example")
            logger.info("INFO logging example")
            yield record


dispatch(TestCustomCMD, sys.argv, sys.stdin, sys.stdout, __name__)

commands.conf:

[testcustcmd]
filename = test_custom_command.py
python.version = python3
chunked = true

and a search to test:

| makeresults count=2 | testcustcmd

The search completes correctly and returns this:

However, I don't find the logged lines anywhere. On my Splunk server I ran this:

grep -rni "logging example" "/opt/splunk/var/log/splunk/"

But the result is empty. Can you help me understand what I am doing wrong here?

Thank you in advance,
Tommaso
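In case it's useful while debugging, a hedged sketch of an explicit stdlib file logger that sidesteps the splunklib logging configuration entirely; the log file name is hypothetical, and SPLUNK_HOME is assumed to be set in the command's environment:

import logging
import os

def get_debug_logger():
    # Hypothetical standalone logger: writes straight to a file we choose,
    # independent of how splunklib routes its own log output.
    log_path = os.path.join(os.environ.get("SPLUNK_HOME", "/opt/splunk"),
                            "var", "log", "splunk", "testcustcmd_debug.log")
    debug_logger = logging.getLogger("testcustcmd")
    debug_logger.setLevel(logging.DEBUG)
    if not debug_logger.handlers:  # avoid stacking handlers on re-dispatch
        handler = logging.FileHandler(log_path)
        handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
        debug_logger.addHandler(handler)
    return debug_logger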
Hi, is there any option to add banners to the AppDynamics dashboard for application maintenance or server maintenance notifications? I appreciate your suggestions. Thanks, Raj
I am doing load testing on the persistent queue to see how the queue behaves. We have deployed via the ArgoCD YAML below.

target:
  kind: Application
  name: splunk
patch: |-
  apiVersion: argoproj.io/v1alpha1
  kind: Application
  metadata:
    name: splunk
  spec:
    source:
      helm:
        parameters:
        - name: clusterName
          value: -QA
        - name: distribution
          value: openshift
        - name: splunkObservability.realm
          value: eu0
        - name: splunkPlatform.endpoint
          value: 'https://131.97/services/collector'
        - name: splunkPlatform.index
          value: 122049
        - name: splunkPlatform.insecureSkipVerify
          value: "true"
        - name: splunkPlatform.sendingQueue.persistentQueue.enabled
          value: "true"
        - name: splunkPlatform.sendingQueue.persistentQueue.storagePath
          value: "/var/addon/splunk/exporter_queue"
        - name: agent.resources.limits.memory
          value: "0.5Gi"

Can someone help me with how to do this?
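As one possible approach, a minimal load-generation sketch that posts synthetic events at the HEC endpoint so the sending queue fills up; the endpoint host and token are placeholders (the index matches the YAML above):

#!/usr/bin/env bash
# Hypothetical HEC flood loop to exercise the persistent sending queue.
ENDPOINT="https://<collector-host>:8088/services/collector"  # placeholder
TOKEN="<hec-token>"                                          # placeholder
for i in $(seq 1 100000); do
  curl -sk "$ENDPOINT" \
    -H "Authorization: Splunk $TOKEN" \
    -d "{\"event\": \"pq load test $i\", \"index\": \"122049\"}"
done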
I have done this, but I can't see anything in Event Viewer. What's the problem?
Hello Everyone, I have two Splunk search queries.

query-1:

index="my_index" kubernetes_namespace="my_ns" kubernetes_cluster!="bad_cluster" kubernetes_deployment_name="frontend_service" msg="RESPONSE" "/my_service/user-registration"
| dedup req_id
| stats count as hits avg(responseTime) as avgResponse perc90(responseTime) as nintyPerc by url method kubernetes_cluster
| eval avgResponse=round(avgResponse,2)
| eval nintyPerc=round(nintyPerc,2)

output:

url                            method  kubernetes_cluster  hits   avgResponse  nintyPerc
/my_service/user-registration  POST    LON                 11254  112          535

query-2:

index="my_index" kubernetes_namespace="my_ns" kubernetes_cluster!="bad_cluster" kubernetes_deployment_name="frontend_service" msg="RESPONSE" "/my_service/profile-retrieval"
| eval normalized_url="/my_service/profile-retrieval"
| stats count as hits avg(responseTime) as avgResponse perc90(responseTime) as nintyPerc by normalized_url method kubernetes_cluster
| eval avgResponse=round(avgResponse,2)
| eval nintyPerc=round(nintyPerc,2)

output:

url                            method  kubernetes_cluster  hits   avgResponse  nintyPerc
/my_service/profile-retrieval  GET     LON                 55477  698          3423

query-2 matches multiple URLs like the ones below, but they all belong to the same endpoint:

/my_service/profile-retrieval/324524352
/my_service/profile-retrieval/453453?displayOptions=ADDRESS%2CCONTACT&programCode=SKW
/my_service/profile-retrieval/?displayOptions=PREFERENCES&programCode=SKW&ssfMembershipId=00408521260

Hence I used eval to normalize them: eval normalized_url="/my_service/profile-retrieval"

How do I combine both queries to return one simplified output, as shown below?

url                            method  kubernetes_cluster  hits   avgResponse  nintyPerc
/my_service/user-registration  POST    LON                 11254  112          535
/my_service/profile-retrieval  GET     LON                 55477  698          3423

Highly appreciate your help!!
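A hedged sketch of one way to combine them into a single search, normalizing both endpoints with case(); note this applies query-1's dedup req_id to both endpoints, which may or may not be what you want:

index="my_index" kubernetes_namespace="my_ns" kubernetes_cluster!="bad_cluster" kubernetes_deployment_name="frontend_service" msg="RESPONSE" ("/my_service/user-registration" OR "/my_service/profile-retrieval")
| eval normalized_url=case(match(url, "user-registration"), "/my_service/user-registration",
                           match(url, "profile-retrieval"), "/my_service/profile-retrieval")
| dedup req_id
| stats count as hits avg(responseTime) as avgResponse perc90(responseTime) as nintyPerc by normalized_url method kubernetes_cluster
| eval avgResponse=round(avgResponse,2), nintyPerc=round(nintyPerc,2)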
Hi Everyone, I am experiencing an error when sending events from Mission Control to Splunk SOAR. I always get a failure when the "Send to SOAR" action is automatically triggered through Adaptive Response. Before I automated it, I tried sending event data from Mission Control to SOAR manually by clicking the three dots and then selecting 'Run Adaptive Response Actions', and everything went smoothly. Has anyone experienced a similar problem? Thanks, Zake
Thought I would post here in the community as well since I have this opened with support. A couple of weeks ago, another agency pushed updates to the Splunk Universal Forwarder to half of my hosts without my knowledge or consent. Those hosts were updated from 9.2.0.1 to 9.2.6.0. The updates went unnoticed for a couple of weeks since the events from our custom application and Event Viewer continued to get indexed.

I started to notice an issue on one dashboard where no perfmon events were coming in. I reviewed another dashboard that checks the status of my forwarders, and that's where I saw the updated installs. I went over to the index the perfmon counters go to and validated that only the hosts using Universal Forwarder 9.2.0.1 were still reporting.

My version of Enterprise was 9.2.1.0, and support recommended I update Enterprise to a newer version. After some testing, I went to Enterprise 9.3.5.0; I'm not ready for 9.4.x with trying to update the kvstore. Reviewing the Universal Forwarder compatibility matrix, I've kept my Universal Forwarders on 9.2.0.1 and 9.2.6.0, and two were updated to 9.3.5.0. Updating Enterprise didn't correct the issue.

I went through troubleshooting on the host, looking over the config files. I did a rebuild of the resource counters and restarted the Splunk forwarder service on one of the hosts using forwarder 9.2.6.0. I've also looked at one of the hosts, adding the service account used as a local member of the Administrators and Remote Management Users groups and adding a path variable for SPLUNK_HOME at "c:\program files\splunkuniversalforwarder". I chatted with the tech who pushed Universal Forwarder, and they're not going to do that again.

The hosts that got updated are members of my custom application's lower environments. I can live without the perfmon counters in the lower environments, and none of my hosts in our production environment were updated. I know that if I uninstall Forwarder and reinstall 9.2.0.1, the perfmon counters will resume coming in. I'm convinced it's a change I need to make and thought I would check with community members who have updated their forwarders. I attached a copy of the inputs.conf from one of my hosts, which is the same for all of them (aside from the environment name).
I used the Metric Finder to graph jvm.gc.duration_count, then exported the results to CSV. I also have a SignalFlow API call that grabs the same data. The counts are the same except they are offset by 5 minutes. In other words, my SignalFlow output says 303 GCs at 15:11, but the Metric Finder export shows the same 303 GCs at 15:16. Subsequent periods are offset in the same way. My code is using ChannelMessage.DataMessage.getLogicalTimestampMs(). The Postman output looks like this:

data: {
data:   "data" : [ {
data:     "tsId" : "AAAAAMcvg8Q",
data:     "value" : 1.0
data:   }, {
data:     "tsId" : "AAAAAKgFlvo",
data:     "value" : 303.0
data:   } ],
data:   "logicalTimestampMs" : 1750709460000,
data:   "maxDelayMs" : 12000
data: }

What's going on? Thanks
I am logged in as the admin user, but whenever I try to access the Tokens, Users, or other settings pages, I get a blank page. I'm not sure what to do next. #Splunk #Enterprise
So I have successfully configured some reports and alerts that send $result to Mattermost. My question is how to deal with a search that returns, say, 5 results. Example: the current search may return "Example Text : Hello World". How do I pass each individual $result? The search could return "Hello World", followed by "Hello World2", followed by "Hello World3". If I put $result.text$, it prints "Hello World", but if I then want to show the second or third result, is that possible through this?
Hello,

We have multiple FortiGate devices forwarding to a Logstash server that stores all the devices' logs in one file (I can't change this, unfortunately). This is then forwarded to our HF, and then to Splunk Cloud. Events sometimes arrive in Splunk with 20+ logs in a single event, and I can't get them to parse out into individual events by host. Below are samples of two logs, but a single event could contain 20+ of them (hostnames redacted).

{"log":{"syslog":{"priority":189}},"host":{"hostname":"redact"},"fgt":{"proto":"1","tz":"+0200","vpntype":"ipsecvpn","rcvdbyte":"3072","policyname":"MW","type":"traffic","identifier":"43776","trandisp":"noop","logid":"0001000014","srcintfrole":"undefined","policyid":"36","rcvdpkt":"3","vd":"root","duration":"180","dstintfrole":"undefined","dstip":"10.53.6.1","level":"notice","eventtime":"1750692044675283970","policytype":"policy","subtype":"local","srcip":"10.53.4.119","dstintf":"root","srcintf":"HUB1-VPN1","sessionid":"5612390","action":"accept","service":"PING","app":"PING","sentbyte":"3072","sentpkt":"3","dstcountry":"Reserved","poluuid":"cb0c79de-2400-51f0-7067-d28729f733cf","srccountry":"Reserved"},"timestamp":"2025-06-23T15:20:45Z","data_stream":{"namespace":"default","dataset":"fortinet.fortigate","type":"logs"},"@timestamp":"2025-06-23T15:20:45.000Z","type":"fortigate","logstash":{"hostname":"no_logstash_hostname"},"tags":["_grokparsefailure"],"@version":"1","system":{"syslog":{"version":"1"}},"event":{"created":"2025-06-23T15:20:45.563831683Z","original":"<189>1 2025-06-23T15:20:45Z redact - - - - eventtime=1750692044675283970 tz=\"+0200\" logid=\"0001000014\" type=\"traffic\" subtype=\"local\" level=\"notice\" vd=\"root\" srcip=10.53.4.119 identifier=43776 srcintf=\"redact\" srcintfrole=\"undefined\" dstip=10.53.6.1 dstintf=\"root\" dstintfrole=\"undefined\" srccountry=\"Reserved\" dstcountry=\"Reserved\" sessionid=5612390 proto=1 action=\"accept\" policyid=36 policytype=\"policy\" poluuid=\"cb0c79de-2400-51f0-7067-d28729f733cf\" policyname=\"MW\" service=\"PING\" trandisp=\"noop\" app=\"PING\" duration=180 sentbyte=3072 rcvdbyte=3072 sentpkt=3 rcvdpkt=3 vpntype=\"ipsecvpn\""},"observer":{"ip":"10.53.12.113"}}

{"log":{"syslog":{"priority":189}},"host":{"hostname":"redact"},"fgt":{"proto":"1","tz":"+0200","rcvdbyte":"3072","policyname":"redact (ICMP)","type":"traffic","identifier":"43776","trandisp":"noop","logid":"0001000014","srcintfrole":"wan","policyid":"40","rcvdpkt":"3","vd":"root","duration":"180","dstintfrole":"undefined","dstip":"10.52.25.145","level":"notice","eventtime":"1750692044620716079","policytype":"policy","subtype":"local","srcip":"10.53.4.119","dstintf":"root","srcintf":"wan1","sessionid":"8441941","action":"accept","service":"PING","app":"PING","sentbyte":"3072","sentpkt":"3","dstcountry":"Reserved","poluuid":"813c45e0-3ad6-51f0-db42-8ec755725c23","srccountry":"Reserved"},"timestamp":"2025-06-23T15:20:45Z","data_stream":{"namespace":"default","dataset":"fortinet.fortigate","type":"logs"},"@timestamp":"2025-06-23T15:20:45.000Z","type":"fortigate","logstash":{"hostname":"no_logstash_hostname"},"tags":["_grokparsefailure"],"@version":"1","system":{"syslog":{"version":"1"}},"event":{"created":"2025-06-23T15:20:45.639474828Z","original":"<189>1 2025-06-23T15:20:45Z redact - - - - eventtime=1750692044620716079 tz=\"+0200\" logid=\"0001000014\" type=\"traffic\" subtype=\"local\" level=\"notice\" vd=\"root\" srcip=10.53.4.119 identifier=43776 srcintf=\"wan1\" srcintfrole=\"wan\" dstip=10.52.25.145 dstintf=\"root\" dstintfrole=\"undefined\" srccountry=\"Reserved\" dstcountry=\"Reserved\" sessionid=8441941 proto=1 action=\"accept\" policyid=40 policytype=\"policy\" poluuid=\"813c45e0-3ad6-51f0-db42-8ec755725c23\" policyname=\"redact (ICMP)\" service=\"PING\" trandisp=\"noop\" app=\"PING\" duration=180 sentbyte=3072 rcvdbyte=3072 sentpkt=3 rcvdpkt=3"},"observer":{"ip":"10.52.31.14"}}

I have edited props.conf to contain the following stanza, but still no luck:

[fortigate_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = }(\s*)\{

Any direction on where to go from here?
Is it possible to get the upstream (calling) service details for an inferred service through metrics? There is no option through the built-in dimensions. Can someone suggest whether it's possible?
Hi, I am having an issue trying to make a version of the Search app's filtering timeline work in my Dashboard Studio dashboard with other visualizations. I have set a token interaction on click to update the global_time.earliest value to the time that is clicked on the chart. However, I am running into an issue where I cannot set the global_time.latest value by clicking the timechart again. If I set up a second token interaction to get the latest time, it just sets it to the same value as the earliest, all on the first click. I'm trying to filter down to each bar's span on the timechart, which is 2 hours (| timechart span=2h ...). Like the Search app's version, this timechart is meant to be a filtering tool that only filters down the search times of the other visualizations once it is set. Setting the earliest token works perfectly fine; it's all about the latest. I just need to know how, or whether, it is possible. Thank you!!
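For reference, a minimal sketch of the Dashboard Studio source I'd expect for the first (earliest) interaction, assuming the drilldown.setToken handler; the key path is a guess and depends on the visualization type:

"eventHandlers": [
  {
    "type": "drilldown.setToken",
    "options": {
      "tokens": [
        { "token": "global_time.earliest", "key": "row._time.value" }
      ]
    }
  }
]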
Hey mates, I'm new to Splunk, and while ingesting data from my local machine to Splunk, this message shows up: "The TCP output processor has paused the data flow. Forwarding to host_dest=192.XXX.X.XX inside output group default-auto lb-group from host_src=MRNOOXX has been blocked for blocked_seconds=10. This can stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data." Kindly help me. Thank you
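For anyone triaging the same warning, a minimal diagnostic sketch to run on the receiving indexer, showing which queues are reporting as blocked (assuming access to its _internal index):

index=_internal source=*metrics.log* group=queue blocked=true
| timechart count by name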