All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi @Amoreuser, based on what you described, there seems to be a config issue in your alert setup. If your threshold is set to 90 but alerts are triggering at 89.1, you may want to check a few things: First, verify that your alert condition is set exactly to "Above" and not "Above or Equal". Second, take a look at your search query to make sure there's no unintended data processing affecting the values. If you're working with decimal values, you might want to add a round() function in your search for more precise threshold control. Could you share your search query so I can help identify the issue? If this helps, please upvote.
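As a sketch of that round() suggestion (index, sourcetype, and field names here are placeholders, not from this thread), rounding before the comparison keeps decimal values from behaving unexpectedly around the threshold:

```
index=my_metrics sourcetype=my_cpu_data
| eval cpu_pct=round(cpu_pct, 0)
| where cpu_pct > 90
```

With this, 89.1 rounds to 89 and does not pass the `> 90` test, while 90.6 rounds to 91 and does.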
https://docs.splunk.com/Documentation/Splunk/9.4.0/Installation/Systemrequirements (as of Dec 3, 2024) shows support for Amazon Linux 2023 on x86 (but not on ARM). But the latest Splunk Cloud release is 9.3.2408 (checked on Dec 16, 2024).
Hello, I wanted more detailed information, so I opened this case. About the alert settings: I set Threshold to '90', Trigger to 'Immediately', and Alert when 'Above'. With these settings, does the alarm start firing at 90.1? I remember that in the beginning, when I set it to 90, it was registered as 89, and it is currently set up that way. I would like to know whether an alert fires at 89.1. If an alarm does occur at 89.1, I need to fix it as soon as possible. Please reply. Thank you!!!
Sorry for not being so clear; here is a description of what was done. I want to extract fields on the HF before sending to Splunk Cloud.

transforms.conf

[field_extract_username]
SOURCE_KEY = _raw
REGEX = (\susername\s\[(?P<user>.+?)\]\s)
FORMAT = user::$1

props.conf

[keycloak]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
pulldown_type = 1
disabled = false
SHOULD_LINEMERGE = true
REPORT-field_extract = field_username
EXTRACT-username = \susername\s\[(.+?)\]\s
EXTRACT-user = (\susername\s\[(?P<user>.+?)\]\s)

EXTRACT-username and EXTRACT-user I created as a test after REPORT-field_extract failed to extract the user field.

_raw log:

{ "log": "stdout F {\"timestamp\":\"%s\",\"sequence\":%d,\"loggerClassName\":\"org.jboss.logging.Logger\",\"loggerName\":\"br.com.XXXXXX.keycloak.login.pf.clients.CustomerLoginClient\",\"level\":\"INFO\",\"message\":\"CustomerLoginClient.fetchValidateLogin - Processed - username [XX157118577] clientId [https://www.XXXX.com/app] took [104ms]\",\"threadName\":\"executor-thread-3577\",\"threadId\":1XXXXX73,\"mdc\":{\"dt.entity.process_group\":\"PROCESS_GROUP-DXXA014C1XXXX7EC\",\"dt.host_group.id\":\"prd\",\"dt.entity.host_group\":\"HOST_GROUP-46FAFFBA838D4E81\",\"dt.entity.host\":\"HOST-971DXXXXXXX0F72E\",\"dt.entity.process_group_instance\":\"PROCESS_GROUP_INSTANCE-60C0A631DB5AB172\"},\"ndc\":\"\",\"hostName\":\"keycloak-XXXXX-X\",\"processName\":\"QuarkusEntryPoint\",\"processId\":1}", "source": "/var/log/containers/keycloak-XXXXX-0_XXXXXX_keycloak-814935ba7b1d4XXXXXXXXeb8d4dfc51d27283a257c4a96526eb.log", "host": "[\"keycloak-XXXXX-0\"]", "type": "-", "environment": "prod" }
Hi... this is aging well but I could really use some help.  When you mention summary Indexing to get historical events, what did you mean?  TIA, -V
Please describe the problem you are having without using the phrase "it does not work", as that tells us nothing about what is wrong. Heavy forwarders parse data exactly the same way indexers do, so any props and transforms you would use on an indexer should work on an HF. If the data passes through more than one HF, then only the first one does the parsing. Also, data sent via HEC to the /event endpoint is not parsed at all. Make sure the props are in the right stanza (the stanza name matches the incoming sourcetype, or starts with "source::" and matches the source name, or starts with "host::" and matches the sending host's name). Be sure to test regular expressions (I like to use regex101.com, but it's not perfect) before using them.
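As a sketch of those three stanza forms (the sourcetype, paths, hostnames, and transform names below are invented for illustration, not taken from this thread):

```
# props.conf on the first full Splunk instance (indexer or HF) the data hits

[my_sourcetype]
# matches events arriving with sourcetype=my_sourcetype
TRANSFORMS-extract = my_transform

[source::/var/log/myapp/*.log]
# matches events by their source path
TRANSFORMS-extract = my_transform

[host::webserver01]
# matches events by the sending host's name
TRANSFORMS-extract = my_transform
```

Note that TRANSFORMS- entries run at index time (which is what applies on an HF in the forwarding path), while REPORT- and EXTRACT- entries run at search time, i.e. on the search head, not the forwarder.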
This is my first time using Splunk Cloud, and I'm trying to perform field extraction directly on the heavy forwarder before the data is indexed. I created REPORT and TRANSFORMS entries in props.conf, with transforms.conf configured using a regex that I tested successfully in Splunk Cloud through field extraction, but it does not work when I try to use the HF. Are there any limitations on field extraction when using a heavy forwarder with Splunk Cloud?
Although this problem is different from the OP's problem, there is another way to handle multiple date formats, e.g. by using coalesce with the date formats in descending order of probability:

| eval my_time=coalesce(
    strptime(genZeit, "%Y-%m-%dT%H:%M:%S%:z"),
    strptime(genZeit, "%Y-%m-%dT%H:%M:%S.%3N%:z"))
@PickleRick wrote: "2. ... you should rather use convert() function, not strftime). ..." Out of interest - why? I much prefer strftime - it can be used with eval and fieldformat. convert cannot be used with fieldformat at all.
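To illustrate the point (a minimal self-contained example; the field name is made up):

```
| makeresults
| eval raw_epoch=_time
| fieldformat raw_epoch=strftime(raw_epoch, "%Y-%m-%d %H:%M:%S")
```

Here fieldformat changes only how raw_epoch is displayed; the underlying value stays a numeric epoch and remains usable in further calculations. convert (e.g. `| convert ctime(raw_epoch)`) rewrites the value itself and has no fieldformat equivalent.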
Sorry for the delay, and thanks for the response. It does not show duration information:

Countrie   Duracion
Uruguay
Uruguay
Uruguay
Uruguay
Denmark
China
Chile
Spain
Uruguay
Spain
Spain
Spain
Uruguay
Spain
Spain
Uruguay
Spain

(the Duracion column is empty for every row)
First and foremost - you should not configure inputs on a search head. Set up a separate HF with those inputs and only use SHs for searching. There might be more issues with your overall setup that we don't know about.
While it might "work", it's definitely a bad idea to handle the main event time this way. The _time field is the most important time field associated with an event and - very importantly - it's the basis for initial event filtering. Just assigning "something" to it and then handling time later at search time is very unusual, confusing, and ineffective performance-wise.
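For contrast, the usual approach is to extract the event's own timestamp into _time at index time via props.conf. A sketch (the sourcetype name and timestamp format below are assumptions for illustration only):

```
# props.conf - parse the event's embedded timestamp into _time at index time
[my:sourcetype]
TIME_PREFIX = "timestamp":"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 40
```

With _time set correctly up front, the time range picker filters events cheaply at search start instead of forcing search-time recalculation.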
Are you asking how to configure Telegraf to poll external devices using SNMP? That's out of scope of this forum since it has nothing to do with Splunk as such. The addon you listed is for ingesting metrics data from Telegraf (already received by its inputs) to Splunk.
Ok. Do you mean that you redefined the datamodel itself, or just changed the acceleration parameters? And when you say things are not in sync, are you talking about the dataset definitions or the summarized data? How did you modify those configurations? Do you have the same settings defined within an app pushed from the deployer?
Wait a second. Splunkbase is a channel for application distribution. While in a standalone server setup you can pull an app directly from Splunkbase, it's not meant to be your deployment server. Trying to pull tricks with the application ID and renaming "in place" is a relatively ugly solution. Why not just release a new app and provide docs for migrating between those "versions"?
Maybe this app https://splunkbase.splunk.com/app/6368 helps you to see what you have in props.conf in your search context?
Hi, as you have renamed the app and changed its AppId, this is a totally new application with no reference to the old one. There is no automatic way to migrate all those KOs from the old app, and especially not from users' private folders. If those installations are on-prem, you could use e.g. this script/solution: https://community.splunk.com/t5/Dashboards-Visualizations/Can-we-move-the-saved-searches-or-knowledge-objects-created/m-p/672741/highlight/true#M55102 You could try to modify that script to work remotely with Splunk Cloud, but it needs some work and I'm not sure whether you can even do it. I have no experience with removing an app from Splunkbase; it can probably be done with a service request. At least you could update the old app and tell everyone that they should use your new one. r. Ismo
Actually, TLS mutual authentication is done by the openssl library and can be configured on an intermediate UF as well (I did it myself several times on s2s inputs). It's just that the HTTP input isn't officially supported on a UF (any documentation about HEC mentions only Splunk Enterprise or Cloud). So in case anything goes sideways, the first thing you'll hear from support is "use a HF instead of a UF".
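For reference, a sketch of what mutual TLS on an s2s input of an intermediate UF can look like (the port, certificate paths, and password placeholder are examples, not from this thread; verify against your own environment):

```
# inputs.conf on the intermediate UF - TLS s2s input requiring client certs
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/mycerts/server.pem
sslPassword = <password>
requireClientCert = true

# server.conf - CA used to validate the forwarders' client certificates
[sslConfig]
sslRootCAPath = $SPLUNK_HOME/etc/auth/mycerts/ca.pem
```

The sending forwarders then need a clientCert configured in their outputs.conf [tcpout] stanza, signed by a CA that the intermediate UF trusts.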
Basically, you shouldn’t migrate both the OS and Splunk at the same time. Pick which one you do first, and after you have finalized it and checked for a couple of days that everything is OK, then do the second migration. Of course, if you have new hosts to migrate to, the OS part is already done and you just migrate Splunk onto them. Again, you can migrate Splunk before the node migration or after it, but don’t try to do both at the same time (e.g. new hosts running a newer version). Here is how I have done it earlier: https://community.splunk.com/t5/Splunk-Enterprise/Migration-of-Splunk-to-different-server-same-platform-Linux-but/m-p/538062 r. Ismo
The overall logic of your search is flawed. You first remove a lot of data with dedup and then try to run stats over a hugely incomplete data set. What is it you're trying to do (in your own words, without SPL)?