First, to answer the question posed in the OP: yes, you need to make modifications to successfully blacklist the events. The regular expression must be valid and correct, or it will not match the data and events will not be dropped as desired. For instance, the '/' character must be escaped. Literal parentheses (as in "Program Files (x86)") must be escaped. There should not be any newlines in the expression. Test the regex with matching and non-matching sample data at regex101.com. Finally, I'm not positive about the debug log setting, since I don't know that Splunk will log the information you seek. If it does, however, it will be on the UF, not on the DS. Go to Settings -> Server settings -> Server Logging and search for channels with "regex" in their names. Set the value for likely candidates to DEBUG. Be aware that this may be extremely verbose and should not be left enabled for long.
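As a concrete sketch (the stanza, EventCode, binary name, and pattern below are hypothetical examples, not taken from the original question), a Windows event log blacklist on the UF with properly escaped parentheses might look like:

```
# inputs.conf on the UF -- the whole blacklist must stay on one line,
# with no newlines inside the regex
[WinEventLog://Security]
# Drop 4688 events whose Message mentions a binary under "Program Files (x86)";
# note the escaped literal parentheses and the escaped dot
blacklist1 = EventCode="4688" Message="(?i)Program Files \(x86\).*example\.exe"
```

Test the Message pattern at regex101.com against a real sample event before deploying it.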
Hello, sorry, I made a mistake in my original post; I just updated my question. I am looking for Max Total Score, the total score after the aggregation (260). Before the aggregation, Max(TotalScore) is 240, for only one row. Please suggest. Thank you.

Max Total Score = Max(Score1) + Max(Score2) + Max(Score3) = 85 + 95 + 80 = 260

This is the output I need:

Class    Name     Subject  TotalScore  Score1  Score2  Score3  Max TotalScore
ClassA   grouped  grouped  240         85      95      80      260
The real pro-tip is always in the comments. 
You *think* your search will produce that output? Why not run the search and remove the doubt? To calculate a total, use the sum function.

index=scoreindex
| stats values(Name) as Name, values(Subject) as Subject, sum(TotalScore) as TotalScore, max(Score1) as Score1, max(Score2) as Score2, max(Score3) as Score3, max(TotalScore) as "Max TotalScore" by Class
| table Class, Name, Subject, TotalScore, Score1, Score2, Score3, "Max TotalScore"
How is this different from what you asked here: Solved: Re: How to display other fields on the same row wh... - Splunk Community? It's nearly the same question, and that linked post seems to contain the answer to this one.
The error is reported because the specified file, /opt/splunk/etc/apps/myapp_hf/bin/app.py, doesn't exist.  I'm guessing the error stems from the change to app.conf, but that depends on what exactly you changed.  Please share a diff of app.conf.  In general, you can change the app's label (the name users see) in app.conf at will, but changes to the app id require a matching change to the subdirectory in which the app is stored.
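As a sketch of that distinction (the label and id values below are hypothetical), the relevant parts of app.conf look like this:

```
# app.conf inside $SPLUNK_HOME/etc/apps/myapp_hf/ (example values)

[ui]
# The display name users see -- safe to change at will
label = My Custom HF App

[package]
# Must match the app's directory name ("myapp_hf" here); changing this id
# without also renaming the directory leaves Splunk looking for scripts
# under a path that no longer exists
id = myapp_hf
```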
Just some thoughts from a philosophical perspective... Splunk loves to parse/extract/search data, and the overall architecture, to me, lets us treat compute and storage as a total commodity in the pursuit of "searching and making sense of our data." So let Splunk do its thing... if you have to do some extra parsing, do some extra parsing to get the problem solved. Then optimize. That's the same coding philosophy I pull forward into what I do in Splunk: get it working, then get it working well. So if you get it working... and it isn't the slowest thing in your environment, then let all of your distributed compute do its thing until the "cost" of your time to optimize outweighs the "cost" of the extra processing time spent running your query/extracts.
Hi caryc3, For MC Integration with existing SOAR instances, we do not have a release date. Please reach out to your account manager with any request.
Hi, You don't have to configure an SMTP server in order to send emails from your detectors/alerts. You can just add email recipients when you build your alert. Splunk Observability Cloud will send them for you.  
Did you delete the app you originally created on the heavy forwarder (HF) before re-deploying your app via the deployment server (DS)? The other thing I would confirm is the permissions on what is being deployed: for example, compare the permissions on your Python scripts between the copy that worked, the copy of the app on the DS, and what ends up on the HF once the app is deployed.
Perhaps something like | inputlookup servers.csv where NOT [|inputlookup HR.csv | format]
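For reference — assuming both csv files have Name and ID columns, as in the question's tables — the format subsearch expands into an explicit boolean filter, roughly:

```
| inputlookup servers.csv where NOT ( ( ID="23" AND Name="Bill" ) OR ( ID="24" AND Name="Peter" ) OR ( ID="27" AND Name="Anita" ) )
```

which leaves only the rows in servers.csv with no matching row in HR.csv (john, ID 25, in this example).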
@woodcock wrote: Like this: ... | reverse | streamstats current=f last(Magnitude) as prevMagnitude ... Now that each event contains the magnitude of the previous event you no longer need any correlation between events so you can tack whatever you like onto the end: ... | search magnitude > 3.3 Incredible answer!  So concise and powerful!
This is how I have been able to access these things via REST. The first thing you need to make sure is you have the "Add to Triggered Alerts" Alert Action that you want to be see these in the GUI or... See more...
This is how I have been able to access these things via REST. The first thing you need to make sure is you have the "Add to Triggered Alerts" Alert Action that you want to be see these in the GUI or REST.  By default, Splunk will run alerts you configure, but won't necessarily "track" them unless you explicitly tell it to.  It looks like this in the GUI: Once you add that Alert Action and some alerts fire, you'll see the triggered events in the GUI and via this REST endpoint:   /alerts/fired_alerts/     Once you're getting your list of triggered alerts, then you can find the sid value within the data returned and then use the other REST endpints to fetch stuff for the actual search that was ran.  Here's a screenshot of a bit of the output from the above rest endpoint on my test environment: Also - note that the Expire setting for the alert will control how long Splunk keeps those results around for those sids:        
How do I get the exception from the tables below? The exception is john, who is not in the HR table.

User list from the servers:

Name   ID
Bill   23
Peter  24
john   25

HR Table:

Name   ID
Bill   23
Peter  24
Anita  27
Hello all, I installed a Splunk add-on on my heavy forwarder just to test it first, and it worked fine. After that I copied it (the entire directory) to the deployment server and pushed it to the heavy forwarder because, you know, I want to manage everything from the deployment server (trying to be organized). The issue is, from the heavy forwarder GUI, when I click on the app icon it doesn't load: it gives me "500 Internal Server Error" (with the picture of the confused horse) and I have these error messages in the internal logs:

"ERROR ExecProcessor [2341192 ExecProcessorSchedulerThread] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/myapp_hf/bin/app.py" HTTP 404 Not Found -- Action forbidden."

I forgot to mention that I changed the name of the original app in app.conf. I can't figure out why it is not working.

Thanks for your help, Kaboom1
@richgalloway  Where exactly we can see this debug log setting in the DS?  
Hi, I'm trying to use the REST API to get and post saved searches that are Alerts, but for some reason it only returns data for Reports. Has anyone else had this problem?

GET https://<host>:<mPort>/services/saved/searches
GET https://<host>:<mPort>/services/saved/searches/{name}
Hi. You can convert your time to epoch values and then subtract them. Here's an example: | makeresults | eval start="10/10/23 23:50:00.031 PM", end="11/10/23 00:50:00.031 AM PM" | eval startepoch=s... See more...
Hi. You can convert your time to epoch values and then subtract them. Here's an example: | makeresults | eval start="10/10/23 23:50:00.031 PM", end="11/10/23 00:50:00.031 AM PM" | eval startepoch=strptime('start',"%m/%d/%y %H:%M:%S.%3N") | eval endepoch=strptime('end',"%m/%d/%y %H:%M:%S.%3N") | eval diff=endepoch-startepoch | eval timediff=tostring(diff,"duration")
@yuanliu Yeah, it is a pain of a search. Here is the issue. A firewall device generates an event with a URL when certain policies are triggered by contractors. That is the initial search. The firewall team has a list of the URLs the contractors have access to, which is the csv file. The firewall team wants to remove any URLs that aren't used in a period of time. Thus, I have to compare the firewall URLs to the csv URLs and output any csv URLs that aren't used in the time frame. This search finds the firewall events:

index=my_index sourcetype=my_sourcetype (rule=policy_1 OR rule=policy_2 OR rule=policy_3) [ | inputlookup my_list_of_urls.csv ]

My issue is the firewall events use the long URL and not the short one. From the firewall:

a478076af2deaf28abcbe5ceb8bdb648.fp.measure.office.com/
aad.cs.dds.microsoft.com/

From the csv file:

*.microsoft.com/
microsoft.com/
*.office.com/
office.com/

The two events from the firewall mean that the two listed in the csv file are still good and don't need to be removed. I try to think of this as two sets, one the firewall results and the other the csv file, but I can't figure out how to search the firewall results with what is in the csv file. Does this make sense? TIA, Joe
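One possible approach — a sketch, assuming the csv column and the event field are both named url and that you can add a lookup definition — is to let Splunk do the wildcard matching for you with a WILDCARD match_type, then subtract the patterns that matched from the full list:

```
# transforms.conf (hypothetical lookup definition)
[allowed_urls]
filename = my_list_of_urls.csv
match_type = WILDCARD(url)
```

```
index=my_index sourcetype=my_sourcetype (rule=policy_1 OR rule=policy_2 OR rule=policy_3)
| lookup allowed_urls url OUTPUT url AS matched_pattern
| stats count AS hits BY matched_pattern
| append [| inputlookup my_list_of_urls.csv | rename url AS matched_pattern | eval hits=0]
| stats sum(hits) AS hits BY matched_pattern
| where hits=0
```

The lookup maps each long firewall URL to whichever wildcard pattern it matched; the append/stats pair then surfaces the csv patterns that never matched any event in the time range — those are the candidates for removal.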