All Posts

The current version of Splunk Enterprise on Linux supports several flavors of the 5.x kernel, but per the most recent system requirements it does not appear to support the 6.x kernel. We are planning a migration of our Splunk infrastructure from Amazon Linux 2 (kernel 5.10.x) to Amazon Linux 2023 (kernel 6.1.x) due to the approaching operating system end of life. Does anyone know whether there are plans for Splunk Enterprise to support the new Amazon OS?
1. It truncates the hyphens (-) before "This is an Example". Now I added ([\r\n+])(.*)(This is an Example).* and it captures everything, but the events are broken into single lines. I have set SHOULD_LINEMERGE = false.
2. Yes, props.conf is on the proper component.
3. I verified using this command: splunk btool inputs list --debug (there is no other setting that is overwriting LINE_BREAKER).
NOTE: Can I use BREAK_ONLY_BEFORE instead of LINE_BREAKER?
Custom token script stopped working. Can anyone spot any obvious errors? It worked perfectly from versions 6.x through 8.x. I get the error "A custom JavaScript error caused an issue loading your dashboard. See the developer console for more details." The console isn't very helpful:

common.js:1702 Error: Script error for: util/console
http://requirejs.org/docs/errors.html#scripterror
    at makeError (eval at e.exports (common.js:1:1), <anonymous>:166:17)
    at HTMLScriptElement.onScriptError (eval at e.exports (common.js:1:1), <anonymous>:1689:36)

// Tokenize.js
require(['jquery', 'underscore', 'splunkjs/mvc', 'util/console'], function($, _, mvc, console) {

    // Set a token on both the default and submitted token models
    function setToken(name, value) {
        var defaultTokenModel = mvc.Components.get('default');
        if (defaultTokenModel) {
            defaultTokenModel.set(name, value);
        }
        var submittedTokenModel = mvc.Components.get('submitted');
        if (submittedTokenModel) {
            submittedTokenModel.set(name, value);
        }
    }

    // Main: handle clicks on any element carrying one of the token data attributes
    $('.dashboard-body').on('click', '[data-on-class],[data-off-class],[data-set-token],[data-unset-token],[data-token-json]', function(e) {
        e.preventDefault();
        console.log("Inside the click");
        var target = $(e.currentTarget);
        console.log("here");
        console.log("target.data('on-class')=" + target.data('on-class'));

        // Swap CSS classes between the previously active element and the clicked one
        var cssOnClass = target.data('on-class');
        var cssOffClass = target.data('off-class');
        if (cssOnClass) {
            $("." + cssOnClass).attr('class', cssOffClass);
            target.attr('class', cssOnClass);
        }

        // Set a single token from data-set-token / data-value
        var setTokenName = target.data('set-token');
        if (setTokenName) {
            setToken(setTokenName, target.data('value'));
        }

        // Unset one or more comma-separated tokens from data-unset-token
        var unsetTokenName = target.data('unset-token');
        if (unsetTokenName) {
            var tokens = unsetTokenName.split(",");
            var arrayLength = tokens.length;
            for (var i = 0; i < arrayLength; i++) {
                setToken(tokens[i], undefined);
            }
        }

        // Set/unset multiple tokens from a JSON object in data-token-json
        var tokenJson = target.data('token-json');
        if (tokenJson) {
            try {
                if (_.isObject(tokenJson)) {
                    _(tokenJson).each(function(value, key) {
                        if (value == null) {
                            // Unset the token
                            setToken(key, undefined);
                        } else {
                            setToken(key, value);
                        }
                    });
                }
            } catch (e) {
                console.warn('Cannot parse token JSON: ', e);
            }
        }
    });
});
Using just Splunk, you could do an ugly hack and send the data to another HF instance on which you force incoming data to go through the typing queue again instead of skipping straight to the indexing queue. But this is a very unusual and unintuitive design. You might be able to do it with Cribl, but I'm not sure about that.
1. Are you sure the LINE_BREAKER is right? I mean, the capture group in the LINE_BREAKER is treated as the line breaker and is removed from the stream. Are you sure you want to cut this much - not more, not less? Also, you usually include \r and/or \n explicitly in your line breaker definition; otherwise the results might not be what you expect (a quick sketch follows after this list).
2. Are you sure you're putting your props.conf on the proper component in your environment?
3. Did you verify with btool that there is no other setting overwriting your line breaker?
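For illustration, a minimal sketch of what I mean by including the newline in the capture group - the stanza name is the one from the question and the breaker text is just an assumption based on the sample events, so treat it as a starting point rather than a tested config:

[mysourcetype]
# Only the first capture group ([\r\n]+) is discarded; the banner line stays at the start of the next event
LINE_BREAKER = ([\r\n]+)-+\s+This is an Example
SHOULD_LINEMERGE = false

And for point 3, the btool check would look something like:

$SPLUNK_HOME/bin/splunk btool props list mysourcetype --debug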
Hi @PickleRick, do you know if there is any possible method of sending data from a Splunk HF to a 3rd-party endpoint that also maintains the metadata?
You can do a conditional lookup using the eval form of lookup:
https://docs.splunk.com/Documentation/Splunk/9.3.2/SearchReference/ConditionalFunctions#lookup.28.26lt.3Blookup_table.26gt.3B.2C.26lt.3Bjson_object.26gt.3B.2C.26lt.3Bjson_array.26gt.3B.29

| eval LookupResult=if(host_type="application", lookup("clientlist.csv", json_object("hostname", host), json_array("clientcode")), null())

You will get back a field called LookupResult like {"clientcode":"abc"}, and you can then extract the value abc from the result.
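As an example, one way to pull the value back out of that JSON (a sketch, assuming the clientcode field name used in the lookup call above):

| eval clientCode=spath(LookupResult, "clientcode")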
---------------------------- This is an Example (He/She) -----------------------------
Version: 21.04.812-174001
Date/time: 2024-10-18/01:00:06 (2024-10-18/05:00:06 UTC)
User/aplnid: /2370
ComputerName/-user: Ann/King
Windows NT version 6.2, build no. 9200 /10872/6241785241
-> Loading program
----------------------------------------------------------------------------------------------------
---------------------------- This is an Example (He/She) -----------------------------
Version: 21.04.812-174001
Date/time: 2024-10-18/01:00:06 (2024-10-18/05:00:06 UTC)
User/aplnid: /2370
ComputerName/-user: James/Bond
Windows NT version 6.2, build no. 9200 /10872/6241785241
-> Start APL (pid 8484)
----------------------------------------------------------------------------------------------------
---------------------------- This is an Example (He/She) -----------------------------
Version: 21.04.812-174001
Date/time: 2024-10-18/01:00:06 (2024-10-18/05:00:06 UTC)
User/aplnid: /2370
ComputerName/-user: Martin/King
Windows NT version 6.2, build no. 9200 /10872/6241785241
-> Initialising external processes
----------------------------------------------------------------------------------------------------

I am trying to break events at "This is an Example".

[mysourcetype]
TIME_FORMAT = %Y-%m-%d/%H:%M:%S
TIME_PREFIX = Date\/time:\s+
TZ = US/Eastern
LINE_BREAKER = (.*)(This is An Example).*
SHOULD_LINEMERGE = false

This works when I test it in "Add Data" but it is not working under props.conf. All the lines are merged into one event. What is the issue here?
There are some additional issues here (feel free to ignore my comments since they are a bit advanced and might simply be overkill if your case is relatively small and simple).

1. You're not using field extractions; you're extracting fields "manually" within your search. For a simple case it might work relatively well, but it usually helps a lot to have extractions defined in configuration - it lets you search for particular fields much faster than having to parse every single event and verify whether the value matches (see the sketch after this list).

2. Your signal-to-noise ratio is relatively low - you have quite a lot of text which doesn't bring any additional value to your data. You don't have any dynamic fields, so you don't have to dynamically name them and such. You could "squeeze" your events to leave only the relevant values in a more structured but less verbose format. Again - if you just have a few hundred bytes each minute, that's probably not worth the work you'd need to put into it, but if you have several thousand hosts monitored this way, it could be worth the savings on license costs.

3. And the most advanced topic here - you could prepare your data properly and ingest it into a metrics index. This way each event consumes a constant 160 bytes of license, but most importantly, searching and doing statistical analyses over metrics indexes is way faster than on normal event indexes (though it's done a bit differently, so you have to learn new commands like mstats or mpreview).
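To illustrate point 1, a minimal sketch of a search-time extraction defined in props.conf instead of an inline rex - the sourcetype name and the regex are assumptions lifted from the search in the question, not something I've tested against your data:

[appservice]
# extract the app version once, at search time, instead of rex-ing it in every search
EXTRACT-AppVersion = version: (?<AppVersion>.*)$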
You can't easily do that. I'm not even sure you can do that at all. The problem is that the data being sent over the syslog output is simply the raw event, optionally(?) prepended with the syslog header. So if you wanted to include the metadata, you'd have to include it in the raw event itself. But even if you managed to do this on a global level (for instance a catch-all sourcetype definition and a transform adding the metadata to the event), the same event would also be sent to your splunktcp output, which would most probably mean the event is unusable in that format.
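Just to illustrate what such a transform could look like (a sketch, assuming INGEST_EVAL is available on your HF; the stanza names and the choice of metadata fields are made up, and as said above, the rewritten event would go to all outputs, not just syslog):

props.conf:
[default]
TRANSFORMS-add_meta_to_raw = add_meta_to_raw

transforms.conf:
[add_meta_to_raw]
INGEST_EVAL = _raw=host." ".source." ".sourcetype." "._raw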
Well, rsyslog configuration can be as simple as

*.* /var/log/all.log

but it can also span several hundred files, with complicated processing rules, data being sent to multiple destinations, and so on. Rsyslog recently had a major overhaul of its docs page https://www.rsyslog.com/doc/v8-stable/index.html (the old docs were a bit confusing at times), and it has a relatively responsive mailing list: https://lists.adiscon.net/mailman/listinfo/rsyslog
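As a small illustration of the "multiple destinations" part, a sketch of a slightly less trivial config (host names and ports are made up):

# /etc/rsyslog.d/50-forward.conf
# keep a local copy of everything
*.* /var/log/all.log
# forward everything over TCP (@@ = TCP, @ = UDP) to a syslog receiver
*.* @@syslog-receiver.example.com:514
# send only mail facility messages to a second box over UDP
mail.* @maillog.example.com:514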
I'm not 100% sure about that. NFR licenses have changed a bit over time. As far as I remember, Partner NFRs used to support distributed environments and now they don't, so the terms regarding multiple uses could also have changed. The main thing, however, is that you're not supposed to use Partner NFRs for production data. They're only meant for lab/dev/testing/demo use and such.
Technically, you could build a common list of CAs and bind it to all inputs (or just make one input with all those CAs), but I suppose you might not want that. In that case you just bind one CA to one input and another CA to another input. You can then even limit access to just the allowed SANs.
I am trying to figure out how to include a lookup in my search, but only for some records. My current search is below. My company has two issues:

1. We do not log the app version anywhere easy to grab, so I need to pull it via rex.
2. We manually maintain a list of clients (some are on an old version and we don't populate the "client" field for them) and which host they are on.

Some clients have both their application and DB on the same host, so my search below results in some weird duplicates where the displayName is listed twice for a single record in my result set (a field containing two values somehow). I want the lookup to only include records where the "host_type" is "application", not "db". Here is my search:

`Environments(PRODUCTION)` sourcetype=appservice "updaterecords" AND "version"
| eval host = lower(host)
| lookup clientlist.csv hostname as host OUTPUT clientcode as clientCode
| eval displayName = IF(client!="",client,clientCode)
| rex field=_raw "version: (?<AppVersion>.*)$"
| eval VMVersion = replace(AppVersion,"release/","")
| eval CaptureDate=strftime(_time,"%Y-%m-%d")
| dedup clientCode
| table displayName,AppVersion,CaptureDate

I did try including host_type right after "..hostname as host.." and using a |where clause later, but that did not work.
Hello Splunk Community,

I monitor the audit.log on RHEL8. As soon as I generate a specific log entry locally, I can find this log entry through my defined search query in Splunk. However, if a few hours pass, I can no longer find it with the same search query. Of course, I adjust the time settings accordingly: first I search in real time (last 30 minutes), then I switch to, for example, Today or the last 4 hours.

I have noticed that this happens with searches that include "transaction msg maxspan=5m". I want to see all the related transactions. When I have the command transaction msg maxspan=5m in my search, I find all the related transactions in real time. After a few hours, I no longer get any hits with the same search query. Only when I remove the transaction command from the search do I see the entries again, but then I don't see as much information as before. Nothing changes if I switch to transaction msg maxevent=3.

Do I possibly have a wrong configuration of my environment here, or do I need to adjust something? Thanks in advance.

Search query:

index="sys_linux" sourcetype="linux_audit"
| transaction msg maxspan=5m
| search type=SYSCALL (auid>999 OR auid=0) auid!=44444 auid!=4294967295 comm!=updatedb comm!=ls comm!=bash comm!=find comm!=crond comm!=sshd comm!="(systemd)"
| rex field=msg "audit\((?P<date>[\d]+)"
| convert ctime(date)
| sort by date
| table date, type, comm, uid, auid, host, name
For security, Splunk UFs default to not listening on a management port.  You must explicitly enable it.
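For what it's worth, a sketch of the kind of setting involved - I believe this is controlled by the [httpServer] stanza in server.conf on the forwarder, but treat the exact attribute as an assumption and check the server.conf spec for your UF version:

# $SPLUNK_HOME/etc/system/local/server.conf on the forwarder
[httpServer]
# the management port (8089) is disabled by default on newer UFs; this re-enables it
disableDefaultPort = false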
I could not see anything in the partner general terms that prohibits the use of AWS (https://www.splunk.com/en_us/legal/splunk-partner-general-terms.html), but you should have a contact in the Splunk Build Program who can give you a more authoritative answer than this community forum, where the members are volunteers.
Yes, it is possible. You cannot, however, deploy the same license in more than one Splunk environment. IOW, you can't use the same license in AWS that you are already using on-prem.
You can also set up the search that generates done_sending_email to run once a day before the main search executes. This way the done_sending_email.csv file will be cleared and the main search will send out emails to people every day.
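For example, one way the daily reset could look as a scheduled search (a sketch - the field names inside done_sending_email.csv are unknown here, so this simply empties the existing file; alternatively, just let the regular generating search overwrite it on its daily run):

| inputlookup done_sending_email.csv
| where 1=2
| outputlookup done_sending_email.csv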
Hi, I am looking into the possibility of deploying a private Splunk instance for integration testing in AWS. Can anyone tell me whether it is possible to install an NFR license on an instance deployed in AWS? Thanks