All Posts

Let's try one last time! In props.conf, the source stanza actually follows a modified regex syntax (see my other comment from 19-nov-2024, or just read props.conf.spec). So the following should work:

[source::/opt/splunk/var/log/syslog/*lx0001*/*]

or perhaps:

[source::/opt/splunk/var/log/syslog/*lx0001*/*.log]

But re-reading the original post, @jonatanjosefson was actually trying to set some property for all syslog events corresponding to a particular host name pattern. You can do that using a host stanza instead of a source stanza:

[host::*lx0001*]

To me, this seems easier to understand and maintain, as it will keep working even if the directory structure of the syslog files changes over time. It depends only on the hostname, not the file name or location.
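For illustration, a minimal props.conf sketch of the host-stanza approach. The attribute used here (TZ) is only a placeholder for whatever property actually needs to be set for those hosts:

# props.conf - hypothetical example
[host::*lx0001*]
# any props.conf attribute can go here; TZ is just an illustrative choice
TZ = UTC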
Typically most of the fields Splunk uses are so-called "search-time fields" - Splunk parses them out during the search. Here you're extracting the fqdn as an indexed field, which means you're parsing it out during indexing and writing an additional field into the metadata files alongside the data itself. This has its cons (immutability - once you've extracted it, you cannot fix it if something went wrong - and additional space usage) and usually does not have many pros. The most obvious advantage of an indexed field is speed if you're using summaries on this field - then you can use the tstats command, which is lightning fast compared to a normal event search summarized with the normal stats command. Other than that, there are few cases where indexed fields are called for. But that's really an advanced topic.

4. Yes, if you plan to restrict access to data, multiple indexes is indeed the way to go.

5. When you're using INGEST_EVAL to set a value with a normal = assignment, if that field already has a value, the new value is _added_ to that field, creating a multivalued field. If you use the := assignment, the old value - if present - is overwritten. I'm not 100% sure how Splunk would treat a multivalued index field (well, index is not technically a field stored in the metadata along with the event itself, it's just where the event is written). So just to be on the safe side I'd use := (a short sketch of the syntax follows below).

6. You're doing a lot of JSON parsing and rendering and doing lookups against CSV files (which are scanned linearly), so that might have a noticeable performance impact on your Splunk instance if you have a lot of data. You might be able to write this in a more maintainable way, and have it perform better, if you implemented this logic one step earlier - in your syslog daemon.
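As an illustration of point 5, a minimal transforms.conf sketch using INGEST_EVAL with the := assignment. The stanza name, field name, and expression are hypothetical and only show the syntax:

# transforms.conf - hypothetical example
[set_fqdn_indexed_field]
# := overwrites any existing value; a plain = would append and create a multivalued field
INGEST_EVAL = fqdn := lower(host)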
For the source stanza, Splunk uses regular expressions based on PCRE (Perl Compatible Regular Expressions), with a few translations. From props.conf.spec:

**[source::<source>] and [host::<host>] stanza match language:** Match expressions must match the entire name, not just a substring. Match expressions are based on a full implementation of Perl-compatible regular expressions (PCRE) with the translation of "...", "*", and ".". Thus, "." matches a period, "*" matches non-directory separators, and "..." matches any number of any characters.

Also from props.conf.spec, when setting a [<spec>] stanza, you can use the following regex-type syntax:

... recurses through directories until the match is met or, equivalently, matches any number of characters.
* matches anything but the path separator 0 or more times. The path separator is '/' on unix, or '\' on Windows. Intended to match a partial or complete directory or filename.
| is equivalent to 'or'.
( ) are used to limit the scope of |.
\\ matches a literal backslash '\'.

So for mylog_* you could specify source::.../mylog_*

It's been a few years on this one, so hope I am right this time!
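As a concrete sketch of the mylog_* case, assuming the goal is to assign some attribute to those files (the sourcetype name below is made up):

# props.conf - hypothetical example using the ... wildcard
[source::.../mylog_*]
sourcetype = my_custom_sourcetype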
You can't use standard built-in mechanisms to do that directly because they use the same result set. You can try some workarounds, as @marnall showed. The other way is to either write your own custom command (which is cumbersome) or group your results into single mailable "items", render the mail body yourself, and use the map command to call sendresults (you need to install that app first, of course).
My example is more like pseudo-code than something you could paste into a dashboard.  No doubt there are many blanks to be filled in. JSON input types are in the manual at https://docs.splunk.com/Documentation/Splunk/9.3.2/DashStudio/inputConfig#Input_configuration_options_available_in_the_visual_editor
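Not from the manual, but as a rough sketch of what an input definition looks like in the Dashboard Studio JSON source - the input name, token, and items here are made up, so check the page above for the exact options available:

"inputs": {
    "input_example": {
        "type": "input.dropdown",
        "title": "Example input",
        "options": {
            "items": [
                {"label": "All", "value": "*"}
            ],
            "token": "example_token"
        }
    }
}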
Hi Marnall, I ran curl against my tenant and it looks like none of the IPs that I asked to be whitelisted are getting through. That could be the reason this issue is happening. Thank you for the reply.
One thing you could do is save your search as an alert that triggers rapidly, which uses a different recipient email value supplied by a lookup table and iterates using another lookup table containing the email addresses that have already received an email.

MAIN SEARCH:

<yoursearch>
| search [| inputlookup list_all_emails.csv | table action | search NOT [| inputlookup done_sending_email.csv | table action | dedup action] | head 1]
| outputlookup done_sending_email.csv append=true

You can generate the done_sending_email.csv with this search:

| makeresults
| eval action = "randomfillervaluethatisnotanemail"
| outputlookup done_sending_email.csv

And generate the list_all_emails.csv with this search:

<yoursearch>
| dedup action
| table action
| outputlookup list_all_emails.csv

Once the two lookup tables are generated, run the main search a few times to see if it iterates through the results for the first few users. If it works, regenerate the done_sending_email.csv lookup and then save the main search as an alert. In the alert settings, scroll down to "When Triggered", set the To: field to $result.action$, and set the rest of the "send email" options to your preference. Set the cron schedule to something rapid like * * * * * or */5 * * * *, then save the alert.

You can then wait as your alert sends a different email to a different user on each execution, containing only the results relevant to them. Once the done_sending_email.csv and list_all_emails.csv lookup tables are almost the same size (done_sending_email.csv will be one row bigger because of the filler value), all the emails have been sent. You can then disable the alert, or empty the done_sending_email.csv file if you'd like to send another wave of emails.
Thank you! I'm working on reproducing this in JSON format for Dashboard Studio, and keep getting an error that the input must have a 'type' specified... any guidance on what that would need to be?
@pradeepiyer2024 @tscroggins, we have recently launched an EDI monitoring & analysis Solution Accelerator. We'd love to connect and see if you are interested in evaluating it and giving feedback.
@Naa_Win, we do have an EDI Solution Accelerator. Would love to connect and give you a rundown of the solution. Let me know if you are interested.
Can you test connecting on port 8089 of your Splunk server to ensure that it is not blocked by a firewall or something? The timeout sounds like a connection or firewall problem.
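For example, just one way to check, assuming command-line access from the machine that's timing out (replace the hostname with your actual Splunk server):

# test raw TCP connectivity to the management port
nc -vz your-splunk-server 8089

# or hit the REST API directly; a certificate warning or login prompt means the port is reachable
curl -k https://your-splunk-server:8089/services/server/info -u admin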
You could make a transforms config which tells Splunk to extract the host field from the log:

# props.conf
[yoursourcetype]
TRANSFORMS-anynameyouwant = arbitrarytransformname

# transforms.conf
[arbitrarytransformname]
DEST_KEY = MetaData:Host
REGEX = ^[^,]*,([^,]+)
FORMAT = host::$1

Once this config is applied to your indexing tier, it will set the host based on the second column in your logs. The default timestamp finder should also find the _time value from the first column of your logs, unless you are setting a sourcetype that bypasses the regular timestamp extraction.

You might also try putting the logs to import into a file on the Splunk machine using the CLI and then making an inputs.conf to index it. You should be able to set the sourcetype either from the inputs.conf stanza or in the web UI when uploading the logs.
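For that second suggestion, a minimal inputs.conf sketch - the path is a placeholder for wherever you put the file:

# inputs.conf - hypothetical example
[monitor:///opt/imported_logs/mylogs.csv]
sourcetype = yoursourcetype
disabled = false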
There is no objective "best approach" for IaC setup for Splunk Enterprise. I would recommend choosing the one which your engineers are most familiar with, and the one where the pricing structure best fits into your budget.
Could you try clearing your browser cache, and tell us which version of Splunk you are using? Also, if you remake this panel in a fresh dashboard, does it produce the same error?
No, you shouldn't need to enter it every time as an input. You can make custom add-on settings which are not username/password; these are set once during app configuration and can be re-used for inputs.
It does show locked-out users as well as unlocked users. Honestly, I know who is locked out and who is not. I wish it would state "yes" when a user is locked out instead of "no" for everyone. But the real issue I have is: how can I know which computer the account is locked out on, or whether it is off-site?
Can anyone help me with an issue I get from time to time on my dashboard built using Splunk Dashboard Studio? For some reason this error occurs only for maps. Here's the query:

| tstats count from datamodel=Cisco_Security.ASA_Dataset where index IN (add_on_builder_index, adoption_metrics, audit_summary, ba_test, cim_modactions, cisco_duo, cisco_etd, cisco_se, cisco_secure_fw, cisco_sfw_ftd_syslog, cisco_sma, cisco_sna, cisco_xdr, duo, encore, endpoint_summary, fw_syslog, history, ioc, main, mcd, mcd_syslog, new_index_for_endpoint, notable, notable_summary, resource_usage_test_index, risk, secure_malware_analytics, sequenced_events, summary, threat_activity, ubaroute, ueba, whois) ASA_Dataset.event_type_filter_options IN (*) ASA_Dataset.severity_level_filter_options IN (*) ASA_Dataset.src_ip IN (*) ASA_Dataset.direction="outbound" groupby ASA_Dataset.dest
| iplocation ASA_Dataset.dest
| rename Country as "featureId"
| stats count by featureId
| geom geo_countries featureIdField=featureId
| where isnotnull(geom)

After reloading the page, the error disappears.
Thanks, I used a similar configuration and now it works. I had to use == rather than =:

index=websphere websphere_logEventType=*
| stats count(websphere_logEventType) BY websphere_logEventType
| eval websphere_logEventType=case(websphere_logEventType=="I", "INFO", websphere_logEventType=="E", "ERROR", websphere_logEventType=="W", "WARNING", websphere_logEventType=="D", "DEBUG", true(), "Not Known")
| dedup websphere_logEventType
Assuming websphere_logEventType is a string, try something like this:

| eval websphere_logEventType=case(websphere_logEventType="I", "INFO", websphere_logEventType="E", "ERROR", websphere_logEventType="W", "WARNING", websphere_logEventType="D", "DEBUG", true(), "Not Known")

Otherwise, I, E, W and D are treated as field names (which don't appear to exist, hence the case evaluates to "Not Known").
Hey @NanSplk01, try

... | eval websphere_logEventType=case(websphere_logEventType=I, "INFO", websphere_logEventType=E, "ERROR", websphere_logEventType=W, "WARNING", websphere_logEventType=D, "DEBUG", 1=1, "Not Known")

If this helps, please upvote.