All Posts

Does a Heavy Forwarder support output via httpout? I've seen conflicting posts, some saying it is supported and others saying it isn't. I've configured it, and it never attempts to send any traffic.
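For context, a minimal sketch of the kind of outputs.conf stanza under discussion; the URI and token below are placeholders, and the setting names should be verified against the outputs.conf spec for the version in use:

[httpout]
httpEventCollectorToken = 00000000-0000-0000-0000-000000000000
uri = https://receiver.example.com:8088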
The issue has been identified. When the other agency pushed SplunkForwarder 9.2.6.0 to my hosts, NT SERVICE\SplunkForwarder was removed as a member of the Performance Monitor Users group. That agency used Ivanti Patch for Endpoint Manager to push the updates. On one of the hosts on 9.2.6.0, I kicked off a repair of SplunkForwarder, and perfmon counters started coming in at the interval set for that host. I then moved to a PowerShell command to add NT SERVICE\SplunkForwarder back as a member of the Performance Monitor Users group. I asked the tech for a copy of the syntax used to push SplunkForwarder to my hosts so I can go over and validate it. I'm asking support about it too: have there been any known issues with Ivanti pushing SplunkForwarder updates?
Hi @splunklearner

Try the following:

| rex field=requestHeaders "User-Agent: (?<useragent>.*?)(?=\s+\w+-?[\w-]*: )"

The non-greedy capture grabs everything after "User-Agent: ", and the lookahead stops it at the start of the next header name, so useragent holds only the User-Agent value.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
Hi there, This is not really a Splunk question; I would start here https://argo-cd.readthedocs.io/en/stable/ cheers, MuS
What have you tried so far?  What were the results?
Are you using the exact same time span each time?  That is, not '-24h', but "2025-06-23:15:10:00". Please share your SPL.
Hi @questionsdaniel

Are you re-running it against the exact same earliest/latest time? Also, the top command can only process the first 50,000 results, so it's likely that the first 50,000 events returned from your indexers are different each time (they arrive in a different order, for example), which is why you see different values. One way around this is to aggregate with stats first; see the sketch after this post.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
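A hedged illustration of sidestepping that limit: aggregate across all matching events with stats, then sort and take the top 10. The index and field names here are placeholders, not anything from the original thread:

index=your_index earliest=-24h@h latest=now
| stats count BY your_field
| sort - count
| head 10

Because stats runs over the full result set before any truncation at 10 rows, repeated runs over the same fixed time range should return the same top 10.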
This is the _raw data.   "requestHeaders":"X-sony-PSD2-CountryCode: GB\r\nX-sony-Request-Correlation-Id: 50977be2-f86c-451a-b318-50b4dfc46b4a\r\nX-sony-Secondary-Id: 1614874131\r\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36\r\nX-sony-Channel-Id: OPENBANK\r\nX-sony-TPP-Journey: AISP\r\nX-sony-Locale: GB\r\nToken_Type: ACCESS_TOKEN\r\nX-sony-SoR-CountryCode: GB\r\nx-fapi-interaction-id: 80c0c1c4-ab24-4cc3-9169-4ef8ecfa90ba\r\nX-sony-Tpp-Name: TrueLayer Limited\r\nContent-Type: application/json\r\nX-sony-Global-Channel-Id: OPENBANK\r\nAccept: application/json\r\nX-sony-Client-Id: 5ec4d197-f5f9-432d-8201-e55618ba970e\r\nX-sony-Chnl-CountryCode: GB\r\nX-sony-Chnl-Group-Member: HRFB\r\nX-sony-Tpp-Id: 001580000103UAAAA2\r\nX-sony-Session-Correlation-Id: 4137bff6-c7e2-40f9-a1ca-699f59bcd6ed\r\nX-sony-Source-System-Id: 4910787\r\nX-sony-TPP-URL: https://api.ob.sony.co.uk/obie/open-banking/v4.0/aisp/accounts/50l6Ph5oSYfmYYnARlvAWtNimns1vO1Vo-r/transactions?fromBookingDateTime=2025-05-24T17%3A18%3A55&toBookingDateTime=2025-06-23T23%3A59%3A59\r\nX-sony-GBGF: RBWM\r\nx-sony-consumer-id: OPENBANKING.OBK_MULESOFT_P\r\nX-sony-Username: arielle1@\r\nX-Forwarded-For: 176.34.193.116\r\nX-sony-Client-Name: TrueLayer\r\nX-sony-Software-Id: gdce9LdcLmKHv2MoEtKdPe\r\nX-Amzn-Trace-Id: Root=1-685ae0f4-a3640d152af9aa6aa7092caa;Sampled=0\r\nHost: rbwm-api.sony.co.uk\r\nConnection: Keep-Alive\r\nAccept-Encoding: gzip,deflate\r\nremove-dup-edge-ctrl-headers-rollout-enabled: 1\r\n",
Hi, I'm attempting to write a search that returns a top 10 of a value. However, I notice that I get a different top 10 when I rerun the search. Does this happen to anyone else?
Please extract the User-Agent field from the JSON event below.

httpMessage: {
    bytes: 2
    host: rbwm-api.sony.co.uk
    method: GET
    path: /kong/originations-loans-uk-orchestration-prod-proxy/v24/status
    port: 443
    protocol: HTTP/1.1
    requestHeaders: Content-Type: application/json
        X-SONY-Locale: en_GB
        X-SONY-Chnl-CountryCode: GB
        X-SONY-Chnl-Group-Member: HRFB
        X-SONY-Channel-Id: WEB
        Cookie: dspSession=hzxVP-NKKzZIN0wfzk85UD0ji7I.*AAJTSQACMDIAAlNLABxvOTRoWElJS2FEU0wrNlMxdTByMGtGN2JYM289AAR0eXBlAANDVFMAAlMxAAI0NQ..*
        Accept: */*
        User-Agent: node-fetch/1.0 ( https://github.com/bitn/node-fetch)
        Accept-Encoding: gzip,deflate
        Host: rbwm-api.sony.co.uk
        Connection: close
        remove-dup-edge-ctrl-headers-rollout-enabled: 1
}

The httpMessage.requestHeaders field is being extracted, but I only want the User-Agent field and its value pulled out of it. Please help me with this.
I have only dabbled in this area, but I am pretty sure (not going to place any bets in Las Vegas on it, though) that your Python code determines where the logs are written. It would be awesome if it wrote to the Splunk logs area, but I am pretty sure it does not. I don't think you specified a location for the logs, so my guess is the script tries to write them to the same location as your .py file. Someone could come along and tell me I am completely wrong, and that is OK, because as I said, this is something I have only done a couple of times, and it was years ago.

If I were going to troubleshoot where the logs are being written, the first thing I would do is spin up a local/dev instance of Splunk. I really encourage anyone doing development, especially on something as complicated as what you are doing, to have a dev instance, whether on a spare computer, a laptop, or a local VM. It is really difficult to troubleshoot on a production environment, and from an app-dev perspective a test environment is good practice anyway. On that test box you should have command-line access: put your code there and look at where the logs land.

Sorry I don't have a silver bullet, but my guess is that the log file is "trying" to be written to the same location as your Python script, which means your inputs.conf needs to point there as well or Splunk won't be able to grab it. (I don't recommend writing logs to your scripts folder, so you will probably want to change the location in the Python script.)
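As a hedged illustration of taking the guesswork out of the location: the sketch below configures Python's standard logging module to write to an explicit path instead of the working directory. The path and logger name are assumptions for the example, not anything from the original post.

import logging
import os

# Write to a fixed, known location instead of whatever directory
# the script happens to be launched from.
LOG_PATH = os.path.join(os.environ.get("SPLUNK_HOME", "/opt/splunk"),
                        "var", "log", "splunk", "my_custom_command.log")

handler = logging.FileHandler(LOG_PATH)
handler.setFormatter(logging.Formatter(
    "%(asctime)s %(levelname)s %(name)s - %(message)s"))

log = logging.getLogger("my_custom_command")  # hypothetical logger name
log.setLevel(logging.INFO)
log.addHandler(handler)

log.info("INFO logging example")  # lands in LOG_PATH, wherever the script runs

With an absolute path like this, "where did my logs go" reduces to checking one file.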
I am using Enterprise 9.3.2, ES 8.1.0, and SOAR 6.4.1 to test the pairing function. Both devices are on-premises and in the same subnet, with no network issues between them. However, when I try to use the pairing function in ES, the following error message appears: "Cannot connect to SOAR. Check that the ES IP address is included on the SOAR stack allow list." When I check the internal log, it shows the following error: "Unexpected error when attempting pairing: HTTPSConnectionPool(host='xxx.xxx.xxx.xxx', port=8443): Max retries exceeded with URL: /rest/version (Caused by SSLError(SSLError(1, '[SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:1143)')))". Does anyone have any ideas on how to resolve this?
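The handshake alert in that log can be reproduced outside ES, which helps narrow things down. A minimal probe sketch, assuming a placeholder host (substitute the SOAR address from the error):

import socket
import ssl

HOST = "xxx.xxx.xxx.xxx"  # placeholder: the SOAR host from the error message
PORT = 8443

# Probe like a plain TLS client; certificate validation is disabled here
# because we only care whether the handshake itself completes.
context = ssl.create_default_context()
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE

try:
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            print("Handshake OK:", tls.version(), tls.cipher())
except ssl.SSLError as err:
    # A handshake_failure alert usually means a protocol/cipher mismatch
    # or that the server insists on a client certificate (mutual TLS).
    print("Handshake failed:", err)

If this fails from the ES host but succeeds from elsewhere, that points at per-client restrictions on the SOAR side rather than a generic SSL misconfiguration.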
Thanks, I think you're on to something. In the UI, I set the resolution to low, which was 2 minutes for a 1-hour window. I changed my query to set the start and end times to the same hour range and set the resolution to 2 minutes (I was using null before). Now I get matching results. But before I set the resolution, which result was "correct"? What I'm actually doing is recording metrics like GC counts, CPU usage, etc., during a performance test in LoadRunner so I can use LR's Analysis tool to combine the data LR produces with the data APM produces. I plan to run the query once per minute as the test runs and grab the most recent minute of data. Will I actually get the most recent minute? I'm assuming that since the UI appeared to lag, the query results are closer to being correct. Thanks
@LAME-Creations: This question is in the AppDynamics part of the community, so I'm not sure your statements about SOAR fit the topic. Regarding the original question: I'm assuming you're using AppDynamics on-premises. You can display a message to users logging in, which I've used in the past to notify users of upcoming maintenance. There is a setting in admin.jsp called "system.use.notification.message" which, when set, displays a message at login to every user. You can find documentation for this feature here: https://docs.appdynamics.com/appd/onprem/24.x/latest/en/controller-deployment/administer-the-controller/customize-system-notifications#id-.CustomizeSystemNotificationsv21.4-ConfigureSystemUseNotification
Hi @LAME-Creations

Thank you for your answer! That's actually one of the open points: I have no idea whether my logs are being written at all. I assumed that, if they are logged, they would be in the standard Splunk log folder /opt/splunk/var/log/splunk, but I might be wrong here. Do you have any suggestions that could help me work out whether I am simply looking in the wrong place?
This is not my area of expertise; I have done a little with custom commands, enough to know you are on the right path, and nothing in your code jumps out at me. But I do know the most common issue with ingesting logs created by firing off Splunk actions (which is what you are doing) is rights. I have hit two problems when doing this: one was easy to fix, the other caused a lot of head banging and frustration before I found a workaround that worked in our environment.

The first thing to validate is that logs are actually being created. I am sure you are doing this, but as a person who has done everything wrong in Splunk, I have tried to troubleshoot why my logs were not coming in only to find that no physical logs existed. After you verify that, I recommend putting a "test" log in the exact same location as your Python logs. If Splunk can't ingest the "test" log, the problem is probably rights.

If Splunk can ingest the "test" log, you may be in the glitchy world that no one has ever truly explained to me, which relates to where you write the logs. Logs that are dynamically created by Splunk could not, for some reason, be read from certain locations on disk, even though Splunk could read "test" logs in the same place. Ultimately I had to write the logs to another location on disk and pull them from there. You are making me think back on painful times, but any time I tried to write the logs inside the directory of the app that provided the custom command, they would not be read; when I moved them to /var/logs it worked. I hope this is not the problem you run into, and hopefully the issue has since been fixed, but it was enough to give me nightmares.

Hopefully everything can be attributed to Splunk not being able to read the location of your Python logs, you change the permissions, and everything works. If not, hopefully someone here has a silver bullet; failing that, try different locations on the OS and see which work (I know that cannot be the true answer, but it is what ultimately worked for me). See the sketch after this post for the input side.
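If relocating the log is the fix, the monitor input has to follow it. A minimal sketch of the inputs.conf stanza, where the path, index, and sourcetype are placeholder assumptions:

[monitor:///var/log/custom_command/my_custom_command.log]
index = main
sourcetype = my_custom_command:log
disabled = false

The same stanza pointed at the "test" log's path is also a quick way to run the rights check described above.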
The most common methods I have heard of for notifying users about changes are notification messages and the global banner. Messages are not my favorite because they are often missed, and if someone dismisses them, they are dismissed for everyone, so I encourage using a global banner. If you are using the web interface, banners can be changed by going to the upper-right Settings > Server controls > Global banner.

This is probably all you need, but if you ever run into the situation we had, with standalone search heads geographically dispersed across the world and hundreds of them to change at one time, we used SOAR automation: a single dashboard would send an alert to SOAR, and SOAR would then SSH into all of the boxes and issue the necessary changes. It was a really cool process, but I am assuming you don't have that problem and can just change the banner in the web interface of one search head.

Here is a video of using SOAR, SSH, and curl to send messages: https://youtu.be/gd5xDNGEsoU

I did not make a video of using SOAR and SSH to modify the global-banner.conf file, but that is the file to modify if you want to change banner notifications; see the sketch after this post.
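For reference, a minimal sketch of what global-banner.conf settings can look like; treat the exact attribute names and values as assumptions to verify against the conf spec shipped with your Splunk version:

[BANNER_MESSAGE_SINGLETON]
global_banner.visible = true
global_banner.message = Scheduled maintenance Saturday 02:00-04:00 UTC
global_banner.background_color = yellow

Editing this file by hand (or via automation such as the SOAR/SSH approach above) and changing the banner in the web UI should end up in the same configuration.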
Hi all,

I am trying to develop a custom command. The custom command works as expected, and now I am working to set up proper logging, but I can't seem to make the Python script log anything, or I'm looking in the wrong place. I built it following what's written here: Create a custom search command | Documentation | Splunk Developer Program

Here's a quick Python code example:

#!/usr/bin/env python
# coding=utf-8
#
# Copyright © 2011-2015 Splunk, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"): you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os, sys, requests, json

sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "lib"))
from splunklib.searchcommands import dispatch, StreamingCommand, Configuration, Option, validators, splunklib_logger as logger


@Configuration()
class TestCustomCMD(StreamingCommand):

    def getFieldValue(self, field_name, record):
        return record[field_name] if field_name in record else ""

    def writeFieldValue(self, field_name, field_value, record):
        record[field_name] = field_value

    def stream(self, records):
        for record in records:
            self.writeFieldValue("TEST FIELD", "TEST CUSTOM COMMAND", record)
            logger.fatal("FATAL logging example")
            logger.error("ERROR logging example")
            logger.warning("WARNING logging example")
            logger.info("INFO logging example")
            yield record


dispatch(TestCustomCMD, sys.argv, sys.stdin, sys.stdout, __name__)

commands.conf:

[testcustcmd]
filename = test_custom_command.py
python.version = python3
chunked = true

and a search to test:

| makeresults count=2
| testcustcmd

The search completes correctly and returns the results with the new field (screenshot omitted).

However, I don't find the logged lines anywhere. On my Splunk server I ran this:

grep -rni "logging example" "/opt/splunk/var/log/splunk/"

But the result is empty. Can you help me understand what I am doing wrong here?

Thank you in advance,
Tommaso
No, I checked there; by the way, it was a Splunk glitch. I solved it. Thank you.
Hello @mwmw, I can see on Splunkbase that the JAMF Pro add-on for Splunk is now cloud compatible. The latest version was released just this June. Thanks, Tejas.