All Posts

What product/service are you talking about? Splunk Enterprise doesn't have the settings you describe. Is it Observability?
Wow, the expected result popped up! Thanks! I will do some testing.
For duration? I'm all for strftime for formatting points in time. But for longer durations you'll get strange results (duration of year 1971?). Also timezone settings can wreak havoc with accuracy of the results.
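For completeness, here is a minimal sketch of what I would do for durations instead (the field name duration and the sample value are just illustrations, not from this thread): tostring() with the "duration" option formats a number of seconds directly, without going through epoch time or timezones.

| makeresults
| eval duration = 93784
| eval duration_readable = tostring(duration, "duration")

This renders 93784 seconds as a readable days+HH:MM:SS value (e.g. 1+02:03:04), whereas feeding the same number to strftime would treat it as a point in time shortly after the 1970 epoch.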
Will the patched version of the MLTK work with ES 7.3.2?   https://advisory.splunk.com/advisories/SVD-2024-1102
Do you mean you want to concatenate host values from all events collectively, not just from each individual event?  If that's all you want, you can do

<your_search>
| stats values(host) AS host
| eval newfield = mvjoin(host, ",")

If you want a new field alongside other fields in events, use eventstats instead of stats

<your_search>
| eventstats values(host) AS newfield
| eval newfield = mvjoin(newfield, ",")
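If it helps, here is a small self-contained sketch of the difference (the host values are made up for illustration):

| makeresults count=3
| streamstats count AS n
| eval host = "server" . n
| eventstats values(host) AS newfield
| eval newfield = mvjoin(newfield, ",")

With eventstats, every event keeps its own fields and additionally gets newfield="server1,server2,server3"; the stats version instead collapses everything into a single row containing only the concatenated value.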
Like @gcusello says, matching backslashes is tricky.  This is because backslash is used as an escape character so special characters can be used as literals.  This applies to backslash itself as well.  This needs to be taken into consideration whenever an interpreter/compiler uses backslash as an escape character.

When you run rex (or any function that uses regex) in a search command, two interpreters act on the string in between double quotes: the regex engine and the SPL interpreter.  As such, to match two consecutive backslashes, you need 8 backslashes instead of 4.  Try this:

| makeresults format=csv data="myregex
C:\\\\Windows\\\\System32\\\\test\\\\
C:\\\\\\\\Windows\\\\\\\\System32\\\\\\\\test\\\\\\\\"
| eval parent = "C:\\\\Windows\\\\System32\\\\test\\\\"
| eval match_or_not = if(match(parent, myregex), "yes", "no")

The result is

match_or_not   myregex                                   parent
no             C:\\Windows\\System32\\test\\             C:\\Windows\\System32\\test\\
yes            C:\\\\Windows\\\\System32\\\\test\\\\     C:\\Windows\\System32\\test\\

This test illustrates the same thing:

| makeresults format=csv data="parent
C:\\\\Windows\\\\System32\\\\test\\\\"
| eval match_or_not1 = if(match(parent, "C:\\\\\\\\Windows\\\\\\\\System32\\\\\\\\test\\\\\\\\"), "yes", "no")
| eval match_or_not2 = if(match(parent, "C:\\\\Windows\\\\System32\\\\test\\\\"), "yes", "no")

match_or_not1   match_or_not2   parent
yes             no              C:\\Windows\\System32\\test\\

If you look around, SPL is not the only interpreter that interprets strings in between double quotes.  For example, in order to produce your test string "C:\\Windows\\System32\\test\\" using the echo command in a shell, you use

% echo "C:\\\\\\Windows\\\\\\System32\\\\\\\\test\\\\\\"
#         ^6x        ^6x           ^7x      ^6x
C:\\Windows\\System32\\test\\

I will leave it as homework to figure out why one segment needs 7 backslashes.
Hi @Amoreuser, Based on what you described, there seems to be a config issue in your alert setup. If your threshold is set to 90 but alerts are triggering at 89.1, you may want to check a few things: First, verify that your alert condition is set to exactly "Above" and not "Above or Equal". Second, take a look at your search query to make sure there's no unintended data processing affecting the values. If you're working with decimal values, you might want to add a round() function in your search to ensure more precise threshold control. Could you share your search query so I can help identify the issue? If this helps, please upvote.
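As a rough sketch of the round() suggestion (the index, sourcetype and the field name cpu_pct are placeholders, not from your environment), rounding before the comparison and keeping a strictly-greater-than condition gives predictable control over what counts as "above 90":

index=your_metrics sourcetype=your_sourcetype
| stats latest(cpu_pct) AS cpu_pct BY host
| eval cpu_pct = round(cpu_pct, 1)
| where cpu_pct > 90

If this is the alert's base search with the trigger condition "number of results is greater than 0", a value of 89.1 can never fire, because only values strictly above 90 survive the where clause.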
https://docs.splunk.com/Documentation/Splunk/9.4.0/Installation/Systemrequirements (Dec 3rd 2024) shows support for Amazon Linux 2023 (amznlx2023) on x86 (but not on ARM). But the latest Splunk Cloud release is 9.3.2408 (checked on Dec 16, 2024).
Hello, I just wanted to know more detailed information, so I opened the case. About the alert settings: I set Threshold '90', Trigger 'Immediately', and Alert when 'Above'. With these settings, does the alarm only occur from 90.1 upward? I remember that in the beginning, when I set it to 90, it was registered as 89, and it is currently set up that way. I would like to know whether an alert is occurring at 89.1. If an alarm does occur at 89.1, I need to fix it as soon as possible. Please reply. Thank you!
Sorry for not being so clear, here is a description of what was done: I want to extract fields in the HF before sending to Splunk Cloud.

transforms.conf

[field_extract_username]
SOURCE_KEY = _raw
REGEX = (\susername\s\[(?P<user>.+?)\]\s)
FORMAT = user::$1

props.conf

[keycloak]
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
pulldown_type = 1
disabled = false
SHOULD_LINEMERGE = true
REPORT-field_extract = field_username
EXTRACT-username = \susername\s\[(.+?)\]\s
EXTRACT-user = (\susername\s\[(?P<user>.+?)\]\s)

I created EXTRACT-username and EXTRACT-user as a test after REPORT-field_extract extracted the user field.

_raw log:

{
  "log": "stdout F {\"timestamp\":\"%s\",\"sequence\":%d,\"loggerClassName\":\"org.jboss.logging.Logger\",\"loggerName\":\"br.com.XXXXXX.keycloak.login.pf.clients.CustomerLoginClient\",\"level\":\"INFO\",\"message\":\"CustomerLoginClient.fetchValidateLogin - Processed - username [XX157118577] clientId [https://www.XXXX.com/app] took [104ms]\",\"threadName\":\"executor-thread-3577\",\"threadId\":1XXXXX73,\"mdc\":{\"dt.entity.process_group\":\"PROCESS_GROUP-DXXA014C1XXXX7EC\",\"dt.host_group.id\":\"prd\",\"dt.entity.host_group\":\"HOST_GROUP-46FAFFBA838D4E81\",\"dt.entity.host\":\"HOST-971DXXXXXXX0F72E\",\"dt.entity.process_group_instance\":\"PROCESS_GROUP_INSTANCE-60C0A631DB5AB172\"},\"ndc\":\"\",\"hostName\":\"keycloak-XXXXX-X\",\"processName\":\"QuarkusEntryPoint\",\"processId\":1}",
  "source": "/var/log/containers/keycloak-XXXXX-0_XXXXXX_keycloak-814935ba7b1d4XXXXXXXXeb8d4dfc51d27283a257c4a96526eb.log",
  "host": "[\"keycloak-XXXXX-0\"]",
  "type": "-",
  "environment": "prod"
}
Hi... this is aging well but I could really use some help.  When you mention summary Indexing to get historical events, what did you mean?  TIA, -V
Please describe the problem you are having without using the phrase "it does not work" as that tells us nothing about what is wrong. Heavy forwarders parse data exactly the same way indexers do, so any props and transforms you would use on an indexer should work on a HF.  If the data passes through more than one HF then only the first one does the parsing.  Also, data sent via HEC to the /event endpoint is not parsed at all. Make sure the props are in the right stanza (the stanza name matches the incoming sourcetype, or starts with "source::" and matches the source name, or starts with "host::" and matches the sending host's name).  Be sure to test regular expressions (I like to use regex101.com, but it's not perfect) before using them.
This is my first time using Splunk Cloud, and I'm trying to perform field extraction directly on the heavy forwarder before indexing the data. I created a REPORT and TRANSFORM in props.conf, with transforms.conf configured using a regex that I tested and found functional in Splunk Cloud through field extraction, but it does not work when I try it on the HF. Are there any limitations on data extraction when using a heavy forwarder to Splunk Cloud?
Although this problem is different from the OP's problem, there is another way to handle multiple date formats, e.g. by using coalesce and the multiple date formats in descending order of probability

| eval my_time=coalesce(
    strptime(genZeit, "%Y-%m-%dT%H:%M:%S%:z"),
    strptime(genZeit, "%Y-%m-%dT%H:%M:%S.%3N%:z"))
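A runnable sketch of that approach with made-up sample values (the field name genZeit comes from the post above, everything else is illustrative):

| makeresults format=csv data="genZeit
2024-12-16T10:15:30+01:00
2024-12-16T10:15:30.123+01:00"
| eval my_time=coalesce(
    strptime(genZeit, "%Y-%m-%dT%H:%M:%S%:z"),
    strptime(genZeit, "%Y-%m-%dT%H:%M:%S.%3N%:z"))
| eval check=strftime(my_time, "%Y-%m-%d %H:%M:%S.%3N %z")

Each strptime() returns null when its format does not match, so coalesce() simply falls through to the next format; the final eval is only there to verify the parsed values.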
@PickleRick wrote: "2. ... you should rather use convert() function, not strftime). ..."

Out of interest - why? I much prefer strftime - it can be used with eval/fieldformat. convert cannot be used with fieldformat, either.
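For context, a minimal example of the fieldformat pattern being referred to (the field name last_seen is just an illustration):

| makeresults
| eval last_seen = now() - 3600
| fieldformat last_seen = strftime(last_seen, "%Y-%m-%d %H:%M:%S")

fieldformat only changes how the value is displayed, so last_seen stays a numeric epoch underneath and can still be sorted and compared, while strftime controls the rendering.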
Sorry for the delay, thanks for the response. Does not show duration information.

Countrie   Duracion
Uruguay
Uruguay
Uruguay
Uruguay
Denmark
China
Chile
Spain
Uruguay
Spain
Spain
Spain
Uruguay
Spain
Spain
Uruguay
Spain
First and foremost - you should not configure inputs on a search head. Set up a separate HF with those inputs and only use SHs for searching. There might be more issues with your overall setup that we don't know about.
While it might "work", it's definitely a bad idea to handle the main event's time this way. The _time field is the most important time field associated with an event and - very, very importantly - it's the basic field for initial event filtering, so just assigning "something" to it and then handling time later at search time is very unusual, confusing, and inefficient performance-wise.
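To illustrate the performance point with a hedged sketch (the index name and the field event_ts are assumptions, not from this thread): a search that filters on _time can prune events via the time range picker, while a search that re-derives time from a string field has to read everything first.

Efficient - _time drives the time range:
index=main earliest=-24h@h latest=now
| stats count

Inefficient - time reconstructed at search time:
index=main earliest=0
| eval event_epoch = strptime(event_ts, "%Y-%m-%dT%H:%M:%S%z")
| where event_epoch >= relative_time(now(), "-24h@h")
| stats count

The first search lets Splunk discard events outside the window before they are ever retrieved; the second one forces every event in the index to be read and evaluated just to decide whether it belongs to the last 24 hours.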
Are you asking how to configure Telegraf to poll external devices using SNMP? That's out of scope of this forum since it has nothing to do with Splunk as such. The addon you listed is for ingesting metrics data from Telegraf (already received by its inputs) to Splunk.
Ok. Do you mean that you redefined the Datamodel itself or just changed the acceleration parameters? And are you talking about the dataset definitions or the summarized data in the context of it being not in sync? How did you modify those configurations? Do you have the same settings defined within an app pushed from the deployer?