All Posts
Something like that can be done using eval.

index=scoreindex
| stats values(Name) as Name, values(Subject) as Subject, sum(TotalScore) as TotalScore, max(Score1) as Score1, max(Score2) as Score2, max(Score3) as Score3 by Class
| eval "Max TotalScore" = Score1 + Score2 + Score3
| table Class, Name, Subject, TotalScore, Score1, Score2, Score3, "Max TotalScore"
Yes, lookups can support wildcards.  Go to Settings->Lookups->Lookup definitions and edit the lookup.  Tick the "Advanced options" box and enter WILDCARD(error) in the "Match type" box.  Then it's up to the lookup file to have wildcards in the appropriate places.
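The same setting can also be made in configuration files rather than the UI. A minimal sketch of the equivalent transforms.conf stanza, assuming a lookup named "my_errors" backed by my_errors.csv with an "error" column (all names here are placeholders):

# transforms.conf
[my_errors]
filename = my_errors.csv
match_type = WILDCARD(error)

With that in place, rows in my_errors.csv can carry patterns such as *timeout* or disk*full in the error column, and the lookup will match them as wildcards.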
Using subsearch results in a large number of OR operators.  It's probably more economical to just do stats:

| inputlookup servers.csv
| eval CSV = "servers"
| inputlookup append=true HR.csv
| fillnull value=HR CSV
| stats values(CSV) as CSV by Name ID
| where mvcount(CSV) == 1 AND CSV == "servers"

(Again, thanks @richgalloway for demonstrating append mode!)
I have a standalone Splunk Enterprise (not Splunk Cloud) set up to work with some log data that is stored in an AWS S3 bucket. The log data is in TSV format, each file has a header row at the top with the field names, and each file is gzipped. I have the AWS TA installed (https://splunkbase.splunk.com/app/1876). I followed the instructions in the documentation (Introduction to the Splunk Add-on for Amazon Web Services - Splunk Documentation) for setting up a Generic S3 input, but no fields are being extracted and the timestamps are not being recognized. The data does ingest, but it is all just raw rows from the TSVs, and the header row is being indexed as an event as well. The timestamps in Splunk are just _indextime even though there is a column called "timestamp" in the data. Does anyone have any suggestions on how I can get this to recognize the timestamps and actually show the field names that appear in the header row?
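(For anyone hitting the same issue: headered-TSV parsing and timestamping are usually driven by props.conf on the node doing the ingestion. A minimal sketch, assuming a sourcetype of "my_s3_tsv" - the sourcetype name and TIME_FORMAT are assumptions, only the "timestamp" column name comes from the description above:

# props.conf
[my_s3_tsv]
INDEXED_EXTRACTIONS = tsv
HEADER_FIELD_LINE_NUMBER = 1
TIMESTAMP_FIELDS = timestamp
# TIME_FORMAT = %Y-%m-%dT%H:%M:%S   # set to whatever format the column actually uses

This is a sketch of the kind of settings involved, not a confirmed fix for the Generic S3 input specifically.)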
list of the URLs the contractors have access to which is the csv file.  The firewall team wants to remove any URLs that aren't used in a period of time.  Thus, I have to compare the firewall URLs to the csv

So, the firewall team wants to update that CSV file so it will not contain entries that haven't had matching events for a given time period.  Is this correct?  This seems to be the opposite of what the Splunk search is doing.

Some more points you need to clarify:

- What field name(s) do the index search and the lookup file use to indicate URLs?  Based on your code snippet, I assume they both use url.
- Does the CSV file contain additional fields?  Based on your code snippet, I will assume none.
- Is there some significance to the trailing slash (/)?  Do all url values end with one trailing slash?  This may not be relevant, but some SPL manipulations may ruin your convention, so I'd like to be cautious.
- A more important question is the use of the asterisk (*).  Are the last two domain levels (root and second level) the only parts of interest?  Given all the illustrations, I have to assume yes.  In other words, no differentiation is needed between *.microsoft.com/ and microsoft.com/.  Additionally, I will assume that every url in the CSV needs to be paired with a wildcard entry.

Using the above assumptions, the following can show you the second-level domains that have not been used:

index=my_index sourcetype=my_sourcetype (rule=policy_1 OR rule=policy_2 OR rule=policy_3)
| eval url = mvjoin(mvindex(split(url, "."), -2, -1), ".")
| dedup url
| inputlookup append=true my_list_of_urls.csv
| fillnull value=CSV sourcetype
| stats values(sourcetype) as sourcetype by url
| where mvcount(sourcetype) == 1 AND sourcetype == "CSV"
| eval url = mvappend(url, "*." . url)
| mvexpand url

The output contains a list of second-level domains affixed with a trailing slash, plus these same strings prefixed with "*.".  These would be the ones to remove.

If you have lots of events with URLs that have no match in the CSV, you can also use a subsearch as a filter to improve efficiency, like:

index=my_index sourcetype=my_sourcetype (rule=policy_1 OR rule=policy_2 OR rule=policy_3)
    [ | inputlookup my_list_of_urls.csv ]
| eval url = mvjoin(mvindex(split(url, "."), -2, -1), ".")
| dedup url
| inputlookup append=true my_list_of_urls.csv
| fillnull value=CSV sourcetype
| stats values(sourcetype) as sourcetype by url
| where mvcount(sourcetype) == 1 AND sourcetype == "CSV"
| eval url = mvappend(url, "*." . url)
| mvexpand url

Hope this helps.
Are you talking about the various dropdown/multiselect inputs for a dashboard, and waiting for that data to populate so they have something to select?  Like this screenshot:

If so, one way to speed this up is to run the search behind those dynamic values as a scheduled search and have that query put the results into a lookup.  Then the search that populates your inputs just loads that lookup with | inputlookup.

If this is what you're thinking, I can provide some more resources to get you headed in that direction.  Also, are you building dashboards in SimpleXML or Dashboard Studio?
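As a sketch of that pattern (all names below are placeholders), the scheduled search ends with outputlookup, and the input's populating search collapses to a single inputlookup:

(scheduled search, e.g. run hourly)
index=web sourcetype=access_combined
| stats count by host
| fields host
| outputlookup dropdown_hosts.csv

(populating search for the dropdown input)
| inputlookup dropdown_hosts.csv

The dropdown then renders as fast as the lookup file can be read instead of waiting on a full index search.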
First, to answer the question posed in the OP: yes, you need to make modifications to successfully blacklist the events.  The regular expression must be valid and correct or it will not match the data, and events will not be dropped as desired. For instance, the '/' character must be escaped, literal parentheses (as in "Program Files(x86)") must be escaped, and there should not be any newlines in the expression.  Test the regex with matching and non-matching sample data at regex101.com.

Finally, I'm not positive about the debug log setting since I don't know that Splunk will log the information you seek.  If it does, however, it will be on the UF and not on the DS.  Go to Settings->Server settings->Server logging and search for channels with "regex" in their names.  Set the value for likely candidates to DEBUG.  Be aware that this may be extremely verbose and should not be enabled for long.
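For illustration, if the blacklist is implemented as a props/transforms nullQueue (one common approach; the sourcetype, stanza name, and path here are all made up), the escaping looks like this:

# props.conf
[my_sourcetype]
TRANSFORMS-drop = drop_x86_events

# transforms.conf
[drop_x86_events]
REGEX = Program Files\(x86\)\/MyApp
DEST_KEY = queue
FORMAT = nullQueue

Note the escaped parentheses and slash, and that the whole REGEX value sits on a single line.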
Hello, sorry, I made a mistake in my original post; I just updated my question. I am looking for Max TotalScore, the total score after the aggregation (260); before the aggregation, max(TotalScore) is only 240, from a single row.  Please suggest. Thank you.

Max TotalScore = max(Score1) + max(Score2) + max(Score3) = 85 + 95 + 80 = 260

This is the output I need:

Class   Name     Subject  TotalScore  Score1  Score2  Score3  Max TotalScore
ClassA  grouped  grouped  240         85      95      80      260
The real pro-tip is always in the comments. 
You *think* your search will produce that output?  Why not run the search and remove the doubt?

To calculate a total, use the sum function.

index=scoreindex
| stats values(Name) as Name, values(Subject) as Subject, sum(TotalScore) as TotalScore, max(Score1) as Score1, max(Score2) as Score2, max(Score3) as Score3, max(TotalScore) as "Max TotalScore" by Class
| table Class, Name, Subject, TotalScore, Score1, Score2, Score3, "Max TotalScore"
How is this different than what you asked here: Solved: Re: How to display other fields on the same row wh... - Splunk Community It's nearly the same question, and that linked post seems to have the answer to this post in it.
The error is reported because the specified file, /opt/splunk/etc/apps/myapp_hf/bin/app.py, doesn't exist.  I'm guessing the error stems from the change to app.conf, but that depends on what exactly you changed.  Please share a diff of app.conf.  In general, you can change the app's label (the name users see) in app.conf at will, but changes to the app id require a matching change to the subdirectory in which the app is stored.
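To illustrate the label/id distinction, a sketch of the relevant app.conf stanzas (the values are examples, not your actual settings):

# app.conf
[ui]
label = My HF App          # safe to change at will; only affects the name users see

[package]
id = myapp_hf              # must match the app directory name,
                           # e.g. /opt/splunk/etc/apps/myapp_hf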
Just some thoughts from a philosophical perspective... Splunk loves to parse/extract/search data, and to me the overall architecture lets us treat compute and storage as a total commodity in the pursuit of "searching and making sense of our data."  So let Splunk do its thing... if you have to do some extra parsing, do some extra parsing to get the problem solved.  Then optimize.

That's the same coding philosophy I carry into what I do in Splunk: get it working, then get it working well. So if you get it working... and it isn't the slowest thing in your environment, then let all of your distributed compute do its thing until the "cost" of your time to optimize outweighs the "cost" of the extra processing time spent running your query/extracts.
Hi caryc3, for MC integration with existing SOAR instances, we do not have a release date. Please reach out to your account manager with any requests.
Hi, You don't have to configure an SMTP server in order to send emails from your detectors/alerts. You can just add email recipients when you build your alert. Splunk Observability Cloud will send them for you.  
Did you delete the app you originally created on the heavy forwarder (HF) before re-deploying your app via the deployment server (DS)?

The other thing I would confirm is what permissions the deployed files end up with - for example, compare the permissions on your Python scripts between the version that worked, the DS's copy of the app, and what ends up on the HF once the app is deployed.
Perhaps something like | inputlookup servers.csv where NOT [|inputlookup HR.csv | format]
@woodcock wrote:

Like this:

... | reverse | streamstats current=f last(Magnitude) as prevMagnitude ...

Now that each event contains the magnitude of the previous event you no longer need any correlation between events, so you can tack whatever you like onto the end:

... | search magnitude > 3.3

Incredible answer!  So concise and powerful!
This is how I have been able to access these things via REST. The first thing to make sure of is that you have the "Add to Triggered Alerts" alert action on the alerts you want to see in the GUI or via REST.  By default, Splunk will run the alerts you configure, but won't necessarily "track" them unless you explicitly tell it to.  It looks like this in the GUI:

Once you add that alert action and some alerts fire, you'll see the triggered events in the GUI and via this REST endpoint:

/alerts/fired_alerts/

Once you're getting your list of triggered alerts, you can find the sid value within the data returned, and then use the other REST endpoints to fetch the results of the actual search that ran.  Here's a screenshot of a bit of the output from the above REST endpoint on my test environment:

Also - note that the Expire setting for the alert controls how long Splunk keeps those results around for those sids:
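As a sketch of that flow from the command line (host, port, and credentials are placeholders; 8089 is the default management port):

# list fired alerts
curl -k -u admin:changeme "https://localhost:8089/services/alerts/fired_alerts?output_mode=json"

# then, with an sid pulled from the response, fetch that search's results
curl -k -u admin:changeme "https://localhost:8089/services/search/jobs/<sid>/results?output_mode=json"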