All Posts

Sure! Thanks. Let me know how we can do this.
According to the docs, MAX_TIMESTAMP_LOOKAHEAD is applied _from_ the TIME_PREFIX-defined location.
Generally these format options apply to the cell, so you can colour a cell using the expression option, but you need to specify the field as part of the <format> specifier; see https://docs.splunk.com/Documentation/SplunkCloud/latest/Viz/TableFormatsXML#Color_format_rules. Take a look at the dashboard examples app (https://splunkbase.splunk.com/app/1603), as it has some JavaScript examples of how to colour rows based on cell values. See the Table Row Highlighting example.
So now you need to set the time prefix to match your actual raw text, i.e. "time":"2024..., AND you need the lookahead set, because time is at the end of your JSON. Your raw data does not appear to have any whitespace between the field names, colons, and values, so try

MAX_TIMESTAMP_LOOKAHEAD = 550
TIME_PREFIX = \"time\":\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%Z

EDIT: MAX_TIMESTAMP_LOOKAHEAD is not needed - see @PickleRick's comment below.
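For context, a minimal props.conf sketch showing where those lines would live, assuming a hypothetical sourcetype named my_json_events (not from the original thread) and that the file is deployed to the first full Splunk instance that parses the data (indexer or heavy forwarder):

# props.conf
[my_json_events]
TIME_PREFIX = \"time\":\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%Z
# MAX_TIMESTAMP_LOOKAHEAD omitted per the edit above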
Yeah, I ran into the same issue. I had to make all my changes again after losing them the first time. When you click Finish after making the Python changes, it resets those scripts to defaults based on the inputs in the Add-on Builder. You need to set up any other inputs that you need configured in the Add-on Builder, then add the fields and code back to those Python scripts. It might be best to keep a backup of them with the changes applied, in case you need to do anything else in the Add-on Builder in future and accidentally overwrite them again.
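As a rough illustration of the backup idea (the path and the add-on name TA_my_addon are assumptions, not from the original post), something like this before re-running the Add-on Builder wizard:

# back up the modified input scripts so the builder can't silently reset them
cp -r $SPLUNK_HOME/etc/apps/TA_my_addon/bin $SPLUNK_HOME/etc/apps/TA_my_addon/bin.bak-$(date +%F)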
Let me first comment that your use case should NOT be a freetext "search box" as input. It should be a multiselect. Play with the following example and see if it fits your needs:

<form version="1.1" theme="light">
  <label>Multivalue input</label>
  <description>https://community.splunk.com/t5/Splunk-Search/How-to-filter-events-using-text-box-values/m-p/704698</description>
  <fieldset submitButton="false">
    <input type="multiselect" token="multivalue_field_tok" searchWhenChanged="true">
      <label>select all field values</label>
      <choice value="INFO">INFO</choice>
      <choice value="WARNING">WARNING</choice>
      <choice value="ERROR">ERROR</choice>
      <choice value="*">All</choice>
      <default>*</default>
    </input>
    <input type="multiselect" token="multivalue_term_tok" searchWhenChanged="true">
      <label>select all terms</label>
      <choice value="INFO">INFO</choice>
      <choice value="WARNING">WARNING</choice>
      <choice value="ERROR">ERROR</choice>
      <choice value="*">All</choice>
      <default>*</default>
      <delimiter> OR </delimiter>
    </input>
  </fieldset>
  <row>
    <panel>
      <event>
        <search>
          <query>index = _internal log_level IN ($multivalue_field_tok$)</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="list.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </event>
    </panel>
    <panel>
      <event>
        <title>no field name</title>
        <search>
          <query>index = _internal ($multivalue_term_tok$)</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="list.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </event>
    </panel>
  </row>
</form>

If comma-delimited freetext term input is needed, it is doable, but it will not be as efficient as the above. Please state your use case clearly (without the help of SPL) so volunteers can give you concrete help.
Assuming the values of the groupby field, namely field1, are stable ("output1", "output2"), the solution depends on how granular you want the timechart to be. If the timechart itself is 10min, the simplest solution would be

index=sample sample="value1"
| timechart span=10m count by field1
| where output1 > 0.3 * output2

Otherwise you need to perform stats twice as @gcusello suggests, but change the where command to fit your requirement. Consider a case where your timechart is sparser than 10m, say 1h. You can do

index=sample sample="value1"
| timechart span=10m count by field1
| where output1 > 0.3 * output2
| timechart span=1h sum(count)

To have a timechart more granular than 10min, you'll have to do some crazy math, but it's also doable.
CEF is a fairly annoying format to deal with. It has one part defined as delimited values and another as key=value pairs. There is an app on Splunkbase for handling CEF events - https://splunkbase.splunk.com/app/487 - but I don't remember if it's any good, TBH.
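To illustrate the hybrid layout with a made-up event (not from the original thread): the header fields are pipe-delimited, while the trailing extension is key=value pairs:

CEF:0|ExampleVendor|ExampleProduct|1.0|100|Blocked connection|5|src=10.0.0.1 dst=10.0.0.2 spt=51234 dpt=443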
Let's try one last time! In props.conf, the source stanza actually follows a modified regex syntax (see my other comment from 19-Nov-2024, or just read props.conf.spec). So the following should work:

[source::/opt/splunk/var/log/syslog/*lx0001*/*]

or perhaps

[source::/opt/splunk/var/log/syslog/*lx0001*/*.log]

But re-reading the original post, @jonatanjosefson was actually trying to set some property for all syslog events corresponding to a particular host name pattern - you can do that using a host stanza instead of a source stanza:

[host::*lx0001*]

To me, this seems easier to understand and maintain, as it will work even if the directory structure of the syslog files changes over time. It depends only on the hostname, and not the file name or location.
Typically most of the fields Splunk uses are so-called "search-time fields" - Splunk parses them out during the search. Here you're extracting the fqdn as an indexed field, which means you're parsing it out during indexing and writing an additional field into the metadata files alongside the data itself. This has its cons (like immutability - once you've extracted it, you cannot fix it in case something went wrong - or additional space usage) and usually does not have many pros. The most obvious advantage of having an indexed field is speed if you're using summaries on this field - then you can use the tstats command, and it's lightning fast compared to a normal event search and summarizing with the normal stats command. Other than that, there are few cases when indexed fields are called for. But that's really an advanced topic.

4. Yes, if you plan to restrict access to data, multiple indexes is indeed the way to go.

5. When you're using INGEST_EVAL to set a value with a normal = assignment, if that field already has a value, a new value is _added_ to that field, creating a multivalued field. If you use the := assignment, the old value - if present - is overwritten. I'm not 100% sure how Splunk would treat a multivalued index field (well, index is not technically a field stored in the metadata along with the event itself; it's just where the event is written). So just to be on the safe side I'd use := (see the small sketch after this post).

6. You're doing a lot of JSON parsing and rendering and doing lookups on CSV (which will be done linearly), so that might have a noticeable performance impact on your Splunk instance if you have a lot of data. You might be able to both write it in a more maintainable way and have it perform better if you implemented this logic one step earlier - in your syslog daemon.
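A minimal transforms.conf sketch of the two assignment styles (the transform name and expression are hypothetical; it would still need to be wired up via a TRANSFORMS- line in props.conf as usual):

# transforms.conf
[set_fqdn_indexed_field]
# ":=" overwrites any existing value of the field;
# a plain "=" would instead append and produce a multivalued field
INGEST_EVAL = fqdn := lower(host)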
For the source stanza, Splunk uses regular expressions that are PCRE (Perl Compatible Regular Expressions). From props.conf.spec:

**[source::<source>] and [host::<host>] stanza match language:**
Match expressions must match the entire name, not just a substring. Match expressions are based on a full implementation of Perl-compatible regular expressions (PCRE) with the translation of "...", "*", and ".". Thus, "." matches a period, "*" matches non-directory separators, and "..." matches any number of any characters.

Also from props.conf.spec, when setting a [<spec>] stanza, you can use the following regex-type syntax:
... recurses through directories until the match is met, or equivalently, matches any number of characters.
* matches anything but the path separator 0 or more times. The path separator is '/' on unix, or '\' on Windows. Intended to match a partial or complete directory or filename.
| is equivalent to 'or'
( ) are used to limit scope of |.
\\ = matches a literal backslash '\'.

So for mylog_* you could specify

source::.../mylog_*

It's been a few years on this one, so I hope I am right this time!
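Tying that together, a small props.conf sketch (the sourcetype value shown is a hypothetical placeholder, not from the original question):

# props.conf
[source::.../mylog_*]
sourcetype = my_custom_log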
You can't use the standard built-in mechanisms to do that directly because they use the same result set. You can try some workarounds, as @marnall showed. The other way is to either write your own custom command (which is cumbersome), or group your results into single mailable "items", render the mail body on your own, and use the map command to call sendresults (you need to install that app first, of course).
My example is more like pseudo-code than something you could paste into a dashboard.  No doubt there are many blanks to be filled in. JSON input types are in the manual at https://docs.splunk.com/Documentation/Splunk/9.3.2/DashStudio/inputConfig#Input_configuration_options_available_in_the_visual_editor
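As a rough sketch of the shape those JSON input definitions take in Dashboard Studio source (the input id, token, and choices below are assumptions, not from this thread), the error about a missing 'type' points at the "type" key here:

"inputs": {
    "input_example": {
        "type": "input.dropdown",
        "title": "Log level",
        "options": {
            "items": [
                {"label": "All", "value": "*"},
                {"label": "ERROR", "value": "ERROR"}
            ],
            "defaultValue": "*",
            "token": "level_tok"
        }
    }
}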
Hi Marnall, I curled into my tenant and it looks like none of the IPs that I asked to be whitelisted are getting through. That could be the reason we're having this issue. Thank you for the reply.
One thing you could do is save your search as an alert that triggers rapidly, which uses a different recipient email value supplied by a lookup table and iterates using another lookup table containing the email addresses that have already received an email.

MAIN SEARCH:

<yoursearch>
| search [| inputlookup list_all_emails.csv | table action | search NOT [| inputlookup done_sending_email.csv | table action | dedup action] | head 1]
| outputlookup done_sending_email.csv append=true

You can generate the done_sending_email.csv with this search:

| makeresults
| eval action = "randomfillervaluethatisnotanemail"
| outputlookup done_sending_email.csv

And generate the list_all_emails.csv with this search:

<yoursearch>
| dedup action
| table action
| outputlookup list_all_emails.csv

Once the 2 lookup tables are generated, run the main search a few times to see if it iterates through the results for the first few users. If it works, then regenerate the done_sending_email.csv lookup and save the main search as an alert. In the Alert settings, scroll down to "When Triggered", set the To: field to be $result.action$, and then set the rest of the "send email" options to your preference. Set the cron schedule to be something rapid like * * * * * or */5 * * * *, then save the alert. You can then wait as your alert sends a different email to a different user on each execution, containing only the results relevant to them. Once the done_sending_email.csv and list_all_emails.csv lookup tables are almost the same size (done_sending_email.csv will be +1 bigger if it has the filler value), the emails have all been sent. You can then disable the alert, or you can empty the done_sending_email.csv file if you'd like to send another wave of emails.
Thank you! I'm working on reproducing this in JSON format for Dashboard Studio, and keep getting an error that the input must have a 'type' specified... any guidance on what that would need to be?
@pradeepiyer2024 @tscroggins, we have recently launched an EDI monitoring & analysis Solution Accelerator; we'd love to connect and see if you are interested in evaluating it and giving feedback.
@Naa_Win, we do have an EDI Solution Accelerator. We'd love to connect and give you a rundown of the solution. Let me know if you are interested.
Can you test connecting on port 8089 of your Splunk server to ensure that it is not blocked by a firewall or something? The timeout sounds like a connection or firewall problem.
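One quick, non-authoritative way to check this from the machine making the connection (the hostname and credentials below are placeholders) is to hit the management port's REST endpoint directly:

curl -k -u admin https://your-splunk-server:8089/services/server/info

If this also times out instead of prompting for a password or returning XML, the management port is most likely blocked between the two hosts.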
You could make a transforms config which tells Splunk to extract the host field from the log:

# props.conf
[yoursourcetype]
TRANSFORMS-anynameyouwant = arbitrarytransformname

# transforms.conf
[arbitrarytransformname]
DEST_KEY = MetaData:Host
REGEX = ^[^,]*,([^,]+)
FORMAT = host::$1

Once this config is applied to your indexing tier, it will set the host based on the second column in your logs. The default timestamp finder should also find the _time value from the first column of your logs, unless you are setting a sourcetype that bypasses the regular timestamp extraction. You might also try putting the logs to import into a file on the Splunk machine using the CLI, and then making an inputs.conf to index it. You should be able to set the sourcetype either from the inputs.conf stanza or in the web UI when uploading the logs.
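A rough sketch of that last inputs.conf option, with a made-up file path and placeholder names to adjust for your environment:

# inputs.conf
[monitor:///opt/imported_logs/myexport.csv]
sourcetype = yoursourcetype
index = main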