All Posts


Hi fsoengn, thanks for the reply. Mine is AppDynamics SaaS. The requirement is that when we share a dashboard and there are any maintenance activities, we need to add a message/banner/scrolling notice at the top so viewers know the application is down for maintenance. I couldn't find any such option currently, but I do remember older versions having something similar. Thanks.
Yes, this is possible by using force_local_processing=true.

force_local_processing = <boolean>
* Forces a universal forwarder to process all data tagged with this sourcetype locally before forwarding it to the indexers.
* Data with this sourcetype is processed by the linebreaker, aggregator, and regex replacement processors in addition to the existing utf8 processor.
* Note that switching this property potentially increases the CPU and memory consumption of the forwarder.
* Applicable only on a universal forwarder.
* Default: false

You should carefully consider whether this option is right for you before deploying it. Read and understand the warning in the spec file (above). By parsing on a UF you are creating a "special snowflake" in your environment where data is parsed somewhere unusual.

props.conf

[my_sourcetype]
# Use with caution. In most cases it's best to let the parsing occur on a Splunk Enterprise server
force_local_processing = true
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
MAX_TIMESTAMP_LOOKAHEAD = ...
TIME_FORMAT = ...
TIME_PREFIX = ^
TRANSFORMS = my_sourcetype_dump_extra_events

transforms.conf

[my_sourcetype_dump_extra_events]
REGEX = discard_events_that_match_this_regex
DEST_KEY = queue
FORMAT = nullQueue

Note that if you want to nullqueue/discard all events EXCEPT those that match a regular expression, the usual documented method won't work (as far as my testing has revealed): https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad#Filter_event_data_and_send_to_queues

You will instead need to use a negative-assertion REGEX, like so:

[my_sourcetype_dump_extra_events]
REGEX = ^((?!keep_events_that_match_this_regex).)*$
DEST_KEY = queue
FORMAT = nullQueue

In my testing, discarding events on UFs using force_local_processing and a negative assertion caused no measurable increase in CPU, memory, disk I/O, or network traffic. I used the query below to check how much data was being sent from the UF to the indexers, and it showed a huge reduction:

| mstats sum(spl.mlog.tcpin_connections.kb) as kb where index=_metrics group="tcpin_connections" fwdType="uf" hostname=UF_NAME span=5m
| timechart span=5m sum(kb)
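Before pushing the negative-assertion regex to the UFs, it can be worth previewing which events it would actually keep. A minimal sketch, assuming the data is already reaching an indexer somewhere and that index=main, sourcetype=my_sourcetype, and keep_events_that_match_this_regex are placeholders for your own values:

index=main sourcetype=my_sourcetype
| regex _raw="keep_events_that_match_this_regex"
| stats count

Comparing that count against the total event count for the sourcetype over the same time range gives a rough idea of how much data the nullQueue filter would discard once force_local_processing is in place.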
I have a data source of significant size and I want to filter out a large percentage of the data on the UF so it isn't sent to the Splunk indexers. How can this be done?
I have a question about modifying a KV store configuration in a search head cluster environment. I created the KV store collection with the Lookup Editor app from one search head instance. Now I would like to add a new column, so I have to modify collections.conf, right? However, the configuration is not on the SHC but on the individual search head instances. What is the best way to add a new column to the KV store collection? Thank you
I used the spath command but it didn't work.
OK, cool, thanks for the additional info. That's giving me more confidence that what you were originally seeing is a reflection of data roll-ups. That being said, I think the "more recent" timestamp is "correct" because the value at that timestamp represents a roll-up of the past X amount of time.

For your use case of reliably grabbing the "past minute", I wonder if it would be a good idea to make that minute well-defined by specifying start_time and end_time instead of just "-1m", so you avoid edge cases where a datapoint might arrive late for some reason out of your control (network latency, java agent metric export, etc.). So maybe the minute you query is something like from (now - 2 min) to (now - 1 min).

Once the data arrives at the Observability Cloud ingest endpoint, I don't think you have to worry about any delay with ingest. The data will be recorded as it streams in, even when something like a chart visualization appears to have a delay in drawing. I'd be more concerned about any potential latency from the point in time that the metric is recorded (e.g. garbage collection in the java agent) to the time it takes for the agent to export that datapoint and for that datapoint to traverse the network to the ingest endpoint. The timestamp on the datapoint will reflect the time it was recorded even if it takes extra time for that datapoint to arrive at ingest (e.g., the datapoint is recorded by the java agent at 16:04:01 but arrives at the ingest endpoint at 16:04:45 due to some temporary network condition).
I use the metadata command to monitor the activity status of member nodes in my cluster, but recently I discovered an exception. My SHC member 01 appeared to be inactive, and the last time it sent metadata was a long time ago. However, when I checked the SHC cluster member status on the back end, it was always in the up state, and its last reported time was also recent. I restarted member 01, but it seems the latest time for member 01 still cannot be seen in the metadata.
I usually use a combination of the .conf VSCode linter that others have suggested for writing, and then upon committing I have AppInspect and the Splunk Packaging Tool run against my apps, which keeps them bug-free and tells me I will pass cloud verification. I will also drop these since I wrote them and am biased, but I use them myself for writing SPL in VSCode: Splunk Search Syntax Highlighter Extension and Splunk Search Autocompletion Tool
Try something like this | rex field=httpMessage.requestHeaders "User-Agent: (?<useragent>.*?)\\r\\n"
Does a Heavy Forwarder support output via HTTPOUT? I've seen conflicting posts, some saying it's supported and others saying it isn't. I've configured it and it never attempts to send any traffic.
The issue has been identified. When the other agency pushed SplunkForwarder 9.2.6.0 to my hosts, NT SERVICE\SplunkForwarder was removed as a member of the Performance Monitor Users group. That agency used Ivanti Patch for Endpoint Manager to push the updates. With one of the hosts on 9.2.6.0, I kicked off a repair of SplunkForwarder and perfmon counters started to come in at the interval that was set for that host. I next moved to a PowerShell command to add NT SERVICE\SplunkForwarder back as a member of the Performance Monitor Users group. I asked the tech for a copy of the syntax used to push SplunkForwarder to my hosts so I can go over and validate it. I'm asking Support about it too: have there been any known issues with Ivanti pushing SplunkForwarder updates?
Hi @splunklearner

Try the following:

| rex field=requestHeaders "User-Agent: (?<useragent>.*?)(?=\s+\w+-?[\w-]*: )"

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
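If it helps to see that rex in context, here is a minimal sketch of a full search around it; the index and sourcetype names are placeholders, not anything from your environment:

index=main sourcetype=api_gateway_logs
| rex field=requestHeaders "User-Agent: (?<useragent>.*?)(?=\s+\w+-?[\w-]*: )"
| table _time useragent

The lookahead stops the capture at the next "Header-Name: " token, so only the User-Agent value ends up in the useragent field.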
Hi there, This is not really a Splunk question; I would start here https://argo-cd.readthedocs.io/en/stable/ cheers, MuS
What have you tried so far?  What were the results?
Are you using the exact same time span each time?  That is, not '-24h', but "2025-06-23:15:10:00". Please share your SPL.
Hi @questionsdaniel

Are you re-running it against the exact same earliest/latest time?

Also, the top command can only process the first 50,000 results, so it's likely that the first 50,000 events returned from your indexers are different each time (they arrive in a different order, for example), which is why you see different values.

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
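One way to sanity-check the ranking is to compute the top 10 with stats instead of top, since stats aggregates across all matching events before anything is ranked. A minimal sketch, assuming a hypothetical index, sourcetype, and field name:

index=main sourcetype=my_sourcetype
| stats count by my_field
| sort 0 - count
| head 10

sort 0 removes the default 10,000-row sort limit so the ranking covers every aggregated value, and head 10 then keeps the top ten. If this and top disagree over the same fixed earliest/latest window, that points at the limits described above rather than at the data itself.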
This is the _raw data.   "requestHeaders":"X-sony-PSD2-CountryCode: GB\r\nX-sony-Request-Correlation-Id: 50977be2-f86c-451a-b318-50b4dfc46b4a\r\nX-sony-Secondary-Id: 1614874131\r\nUser-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36\r\nX-sony-Channel-Id: OPENBANK\r\nX-sony-TPP-Journey: AISP\r\nX-sony-Locale: GB\r\nToken_Type: ACCESS_TOKEN\r\nX-sony-SoR-CountryCode: GB\r\nx-fapi-interaction-id: 80c0c1c4-ab24-4cc3-9169-4ef8ecfa90ba\r\nX-sony-Tpp-Name: TrueLayer Limited\r\nContent-Type: application/json\r\nX-sony-Global-Channel-Id: OPENBANK\r\nAccept: application/json\r\nX-sony-Client-Id: 5ec4d197-f5f9-432d-8201-e55618ba970e\r\nX-sony-Chnl-CountryCode: GB\r\nX-sony-Chnl-Group-Member: HRFB\r\nX-sony-Tpp-Id: 001580000103UAAAA2\r\nX-sony-Session-Correlation-Id: 4137bff6-c7e2-40f9-a1ca-699f59bcd6ed\r\nX-sony-Source-System-Id: 4910787\r\nX-sony-TPP-URL: https://api.ob.sony.co.uk/obie/open-banking/v4.0/aisp/accounts/50l6Ph5oSYfmYYnARlvAWtNimns1vO1Vo-r/transactions?fromBookingDateTime=2025-05-24T17%3A18%3A55&toBookingDateTime=2025-06-23T23%3A59%3A59\r\nX-sony-GBGF: RBWM\r\nx-sony-consumer-id: OPENBANKING.OBK_MULESOFT_P\r\nX-sony-Username: arielle1@\r\nX-Forwarded-For: 176.34.193.116\r\nX-sony-Client-Name: TrueLayer\r\nX-sony-Software-Id: gdce9LdcLmKHv2MoEtKdPe\r\nX-Amzn-Trace-Id: Root=1-685ae0f4-a3640d152af9aa6aa7092caa;Sampled=0\r\nHost: rbwm-api.sony.co.uk\r\nConnection: Keep-Alive\r\nAccept-Encoding: gzip,deflate\r\nremove-dup-edge-ctrl-headers-rollout-enabled: 1\r\n",
Hi, I'm attempting to write a search where I return a top 10 of a value. However, I am noticing that I get different top 10s when I rerun the search. Does this happen to anyone else?
Please extract the User-Agent field from the below JSON event.

httpMessage: {
     bytes: 2
     host: rbwm-api.sony.co.uk
     method: GET
     path: /kong/originations-loans-uk-orchestration-prod-proxy/v24/status
     port: 443
     protocol: HTTP/1.1
     requestHeaders: Content-Type: application/json
X-SONY-Locale: en_GB
X-SONY-Chnl-CountryCode: GB
X-SONY-Chnl-Group-Member: HRFB
X-SONY-Channel-Id: WEB
Cookie: dspSession=hzxVP-NKKzZIN0wfzk85UD0ji7I.*AAJTSQACMDIAAlNLABxvOTRoWElJS2FEU0wrNlMxdTByMGtGN2JYM289AAR0eXBlAANDVFMAAlMxAAI0NQ..*
Accept: */*
User-Agent: node-fetch/1.0 ( https://github.com/bitn/node-fetch)
Accept-Encoding: gzip,deflate
Host: rbwm-api.sony.co.uk
Connection: close
remove-dup-edge-ctrl-headers-rollout-enabled: 1

The httpMessage.requestHeaders field values are being extracted, but I only want the User-Agent field and its value extracted from all of those values. Please help me with this.
I have only dabbled in this area, but I am pretty sure (not going to place any bets in Las Vegas on it, though) that your Python code determines where the logs get written. It would be awesome if it wrote to the Splunk logs area, but I'm pretty sure it does not. I don't think you specified a location for the logs, so my guess is the script will try to write them to the same location as your .py script. Someone could come along and tell me I'm completely wrong, and that's OK, because as I said, this is something I have only done a couple of times, years ago.

If I were going to troubleshoot where the logs are being written, the first thing I would do is spin up a local/dev instance of Splunk. I really encourage anyone doing development, especially on something as complicated as this, to have a dev instance, whether that's a spare computer, a laptop, or a local VM. It is really difficult to troubleshoot on a production environment, and from an app-dev perspective a test environment is just good practice. On that test box you should have access to the command line; put your code on it and see where the logs are being written.

Sorry I don't have a silver bullet, but my guess is that the log file is "trying" to be written to the same location as your Python script, which means your inputs.conf needs to point there as well or Splunk won't be able to grab it. (I don't recommend writing logs to your scripts folder, so you probably will want to change the location in the Python script.)
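As a minimal sketch of that last point, assuming you change the script to write to an explicit path such as /var/log/myapp/myscript.log (the path, sourcetype, and index here are placeholders, not anything from your setup), the matching monitor stanza might look like:

inputs.conf

# Watch the file the Python logging handler actually writes to
[monitor:///var/log/myapp/myscript.log]
sourcetype = my_script_logs
index = main
disabled = false

Whatever path the script's logging actually writes to is the path the monitor stanza has to reference, which is why pinning that down on a dev instance first makes life much easier.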