Activity Feed
- Got Karma for Re: Ingesting Azure Logs (IaaS, PaaS, OS, app, etc.). 05-27-2024 05:50 PM
- Got Karma for Re: Stdev against Null Time Bucket. 05-16-2022 07:18 AM
- Posted Heavy Forwarded Filtering Hosts on Getting Data In. 01-19-2021 09:23 AM
- Posted Re: Ingesting Azure Logs (IaaS, PaaS, OS, app, etc.) on All Apps and Add-ons. 08-06-2020 07:56 AM
- Posted Ingesting Azure Logs (IaaS, PaaS, OS, app, etc.) on All Apps and Add-ons. 07-13-2020 10:53 AM
- Karma Re: Adaptive Response & Notable Race Condition for kchamplin_splun. 06-05-2020 12:50 AM
- Karma Re: Why are bucket times expanding? for woodcock. 06-05-2020 12:50 AM
- Got Karma for Multiple Account Lockout Correlation. 06-05-2020 12:50 AM
- Got Karma for Why are bucket times expanding?. 06-05-2020 12:50 AM
- Karma Re: How to use "where" and "not in" and "like" in one query for HiroshiSatoh. 06-05-2020 12:49 AM
- Karma Re: split multi value fields for sundareshr. 06-05-2020 12:48 AM
- Karma Re: How do I create a table result with "stats count by ‘field’"? for Runals. 06-05-2020 12:47 AM
- Posted Monitor File Activity on SMB Share on Getting Data In. 05-22-2020 09:18 AM
- Tagged Monitor File Activity on SMB Share on Getting Data In. 05-22-2020 09:18 AM
- Posted Re: Adaptive Response Not Pulling Variables on Splunk Enterprise Security. 12-18-2019 09:43 AM
- Posted Adaptive Response Not Pulling Variables on Splunk Enterprise Security. 12-17-2019 01:23 PM
- Tagged Adaptive Response Not Pulling Variables on Splunk Enterprise Security. 12-17-2019 01:23 PM
- Posted Re: Stdev against Null Time Bucket on Deployment Architecture. 12-12-2019 11:09 AM
05-27-2024
05:50 PM
Hi. Can you share any reference document?
03-15-2023
04:25 PM
Not sure if you are still monitoring this, but I wanted to use this solution to compare the current count against the output of this query in a dashboard panel, and it's driving me nuts. For example, if the current hourly count is 5 at 6 PM, I want to compare it to the output of the 6 PM average from this solution.
12-14-2022
02:13 AM
Hi. There is a defined list of field names that will show up in Incident Review in Splunk ES. To get a new field added to that list, e.g. "unique_count", you must add it to the "Incident Review - Event Attributes" list under Configure > Incident Management > Incident Review Settings.
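If you'd rather manage this in config files than in the UI, that same list is stored on the ES search head in log_review.conf. A rough sketch of adding the field there (treat the path, stanza, and key as assumptions to verify against your ES version, and note that a local override replaces the whole list, so include the existing attributes too):
# $SPLUNK_HOME/etc/apps/SA-ThreatIntelligence/local/log_review.conf (assumed location)
[incident_review]
event_attributes = [{"field": "unique_count", "label": "Unique Count"}]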
01-19-2021
09:23 AM
Hello, I've read a ton of forum posts regarding this, but I still cannot get it to work, so I'm hoping someone can point out what I'm doing wrong. The scenario: there are multiple hosts with the Splunk agent installed, and we're currently logging that data to our Splunk indexers plus a syslog server. For a short period of time, I want to send a subset of logs only to syslog, but I can't seem to get that to work. Below is my current config on my heavy forwarders. I expect this to send all server* hosts to both Splunk and syslog, but endpoint* hosts only to syslog. Right now, no matter what I do, everything still goes to Splunk. I even fully commented out the routeSubset section, ran "splunk reload deploy-server", and still got those logs in Splunk. Any thoughts would be greatly appreciated.
props.conf
[source::WinEventLog:Security]
TRUNCATE = 0
SEDCMD-win = s/(?mis)(Token\sElevation\sType\sindicates|This\sevent\sis\sgenerated).*$//g
TRANSFORMS-routing = routeSubset, routeSubset2
transforms.conf
[routeSubset]
SOURCE_KEY = MetaData:Host
REGEX = (?i)^server[0-9][0-9].*
DEST_KEY = _TCP_ROUTING
FORMAT = splunkssl
[routeSubset2]
SOURCE_KEY = MetaData:Host
REGEX = (?i)(.*endpoint[0-9][0-9].*|^server[0-9][0-9].*)
DEST_KEY = _SYSLOG_ROUTING
FORMAT = syslog_server
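For completeness, here is a minimal sketch of the outputs.conf side that these transforms depend on (server names here are placeholders, not my actual config): _TCP_ROUTING and _SYSLOG_ROUTING only take effect when outputs.conf on the heavy forwarder defines target groups whose names match the FORMAT values above, and events that don't match any TRANSFORMS rule still go to whatever defaultGroup points at.
[tcpout]
defaultGroup = splunkssl
[tcpout:splunkssl]
server = indexer1.example.com:9997
[syslog:syslog_server]
server = syslog.example.com:514
type = udp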
Labels
- heavy forwarder
12-18-2019
09:43 AM
I've tried that as well and it still doesn't appear to be working.
Code:
# Iterate over the search results handed to this alert action by the helper
events = helper.get_events()
for event in events:
    print(event)
    # Each event is a dict keyed by field name; get() returns None for missing fields
    risk_object = event.get("risk_object")
    helper.log_info("event.get(\"risk_object\")={}".format(risk_object))
    risk_object_type = event.get("risk_object_type")
    helper.log_info("event.get(\"risk_object_type\")={}".format(risk_object_type))
    risk_message = event.get("risk_message")
    helper.log_info("event.get(\"risk_message\")={}".format(risk_message))
Output:
signature="event.get("risk_object_type")=None"
12-11-2019
08:48 AM
1 Karma
Hello @ericl42 - there isn't a straightforward way to achieve what you're asking with the current implementation of alert actions within Splunk Enterprise (which is what Adaptive Response is built on top of). Right now, all alert/adaptive response actions attached to a correlation search run essentially simultaneously. This means that search-time constructs like the notable ID (aka "rule_id") value, which would be used to update the status of a notable, don't yet exist and are therefore not accessible to the other AR actions being run. Your best bet is to set up an external saved search that looks for the "source" value of the notable(s) you want to auto-close, and attach your AR action to that saved search. This should allow you to access the value of "rule_id" from those search results, and you can then operate on that notable as you see fit.
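For example, a scheduled search along these lines could drive the auto-close action (a rough sketch; the source value is a placeholder, and I'm assuming ES's standard `notable` macro is available to expose rule_id):
`notable`
| search source="Threat - My Correlation Search - Rule"
| table _time, rule_id, status, owner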
11-26-2019
06:44 AM
Thank you very much. Yesterday, before this post, I accidentally started testing with aligntime and it seemed to fix the issue, but I wasn't 100% sure why. I don't think I can use sliding windows because I'm mocking all of these rules up for ES correlation searches.
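For anyone else who hits this: my understanding (hedged, since I mostly confirmed it by experiment) is that aligntime pins the bucket boundaries to a fixed point on the clock instead of letting them float relative to the search's earliest time, so hourly buckets always start exactly on the hour. A tiny self-contained example:
| makeresults
| bin _time span=1h aligntime=@h
| eval bucket_start=strftime(_time, "%Y-%m-%d %H:%M:%S")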
11-18-2019
12:26 PM
Hello!
The foundation of standard deviation-based detections is that you pull a large baseline, and in the case of these detections, a per-entity baseline; e.g., comparing one src_ip to its own history rather than to any other src_ips in the organization. This is why you need a long baseline: if you don't know what "normal" looks like, you can't define what "abnormal" looks like.
In your case, you probably want to compare a src_ip to all src_ips rather than comparing a src_ip to its own baseline. In that case, I would instead use "Sources Sending Many DNS Requests" as a template rather than the standard time series, as it also looks over the short term (grouping by the hour) and compares across multiple different hosts (rough sketch at the end of this post).
https://docs.splunksecurityessentials.com/content-detail/showcase_huge_volume_dns_requests/
Minor note: you're using the "demo data" version of the SSE queries rather than the live-data version (the live-data version doesn't use the eventstats). This is minor, but swapping maxtime for now() and getting rid of that eventstats may make the search easier to follow. (I tried to build it out, but I'm having issues with the formatting here on SplunkBase. I can always email it to you.)
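As a rough illustration of that cross-host comparison (a sketch with placeholder index and field names, not the actual SSE query):
index=dns
| bin _time span=1h
| stats count by src_ip, _time
| eventstats avg(count) as avg_count, stdev(count) as stdev_count by _time
| where count > avg_count + (3 * stdev_count)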
11-19-2019
03:24 AM
For scenario 1, as you pointed out, we could use get_events(). Let's assume your correlation search returns 2 results. In that case:
events = helper.get_events()
for event in events:
    # Each event is a dictionary of field names to values
    helper.log_info("myevent={}".format(event))
This way you get both results; you can iterate through them, take each dictionary object, and parse it to select the fields you need from the event to send to the ticketing system.
08-22-2019
01:00 PM
So, if Bob is logged into Splunk, you want all correlation searches to pass Bob as the username to the adaptive response? I don't know of any way to accomplish that.
| rest /services/authentication/current-context splunk_server=local
This should only return the context of the user who ran the search, so if you added it to the correlation search, I'd be interested to see what it returns for the scheduler, since the scheduler runs its searches under its own user context.
Even if your script example used environment variables, the environment variable would be based on the user who is running the script; it would not have information about other users on the system, which is really the challenge.
If only one user is logged in at a time, then you could look for all users who have active logins:
| rest /services/authentication/httpauth-tokens splunk_server=local | fields userName
After excluding all the system userNames, and assuming the correlation search has access to the REST endpoint and that only one user is logged in, this would give you the username of a user logged into Splunk at the time the correlation search ran.
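Putting those pieces together, the whole thing could look something like this (a sketch; the excluded userNames are assumptions and will differ per environment):
| rest /services/authentication/httpauth-tokens splunk_server=local
| fields userName
| search NOT userName IN ("splunk-system-user", "admin")
| dedup userName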
03-11-2019
09:06 AM
What ended up being the issue was that even though I told the Add-on Builder not to use a proxy, it was still picking one up from Splunk's server.conf and splunk-launch.conf. I added the IPs/hostnames to the no-proxy rule thinking that would work, but unfortunately it still didn't.
The final solution to my problem was adding the following lines to my Python code to tell it, unambiguously, not to use any proxy.
import os
# Force Python's HTTP libraries to bypass any configured proxy for all hosts
os.environ['no_proxy'] = '*'
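For reference, the splunkd-side proxy settings that were overriding me live in server.conf; if you want to fix it there instead, the stanza looks roughly like this (a sketch from memory; verify the key names against the server.conf spec for your Splunk version):
[proxyConfig]
http_proxy = http://proxy.example.com:8080
https_proxy = http://proxy.example.com:8080
no_proxy = localhost, 127.0.0.1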
03-06-2019
09:24 AM
Posting an official answer here for anyone else having issues with this. If I used dc() on the signature field and then modified the where clause to use total_signatures, it worked perfectly for me. I still have all of the variables that I need for adaptive response.
index=security*sep sourcetype IN (symantec:ep:proactive:file, symantec:ep:risk:file)
| stats count by dest, signature, file_name, file_path, file_hash
| stats dc(signature) AS total_signatures, values(file_name) AS process, values(file_path) AS full_path, values(file_hash) AS sha256, count by dest
| where total_signatures > 1
01-31-2019
12:18 PM
I've done quite a bit of research on this topic, and I found this post from a few years ago which references George Starcher's blog post about it. I've gotten quite a ways into it, but I've run into an issue using my new search macro in the "Incident Review - Main" search.
Below are the steps I've completed so far.
- Created a VirusTotal Adaptive Response action that automatically queries the domain of the notable event. This is working very well, and I can get the results if I click on my VT notable event.
- Created a vtpositives(1) macro that looks like this (I know some of this isn't best practice; it's just a dev system): search index=_* OR index=* VirusTotal "queried url" $query$ source!=audittrail | table positives
- When I run the macro from a search and input the URL, it shows the number of positives that VirusTotal reports, which is the field I want to show up under the additional fields of the notable event.
- Modified the "Incident Review - Main" search to add vtpositives(1) right before the risk_correlation field that is currently last. I have tried it both with the (1) and without it. I know that the "query" field populates correctly within the notable event and the VirusTotal results.
- Once I click on the notable events, the page is 100% blank. It does not like my macro at all and prevents any search results from coming up. So my real question is: how do I get the positives field out of my search macro and into the notable event?
For some reason my URLs are not working above so here they are.
- https://answers.splunk.com/answers/481995/splunk-enterprise-security-how-to-add-fields-to-no.html?utm_source=typeahead&utm_medium=newquestion&utm_campaign=no_votes_sort_relev
- http://www.georgestarcher.com/splunk-enterprise-security-enhancing-incident-review/
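For anyone trying to reproduce this, the macro as described would look roughly like this in macros.conf (a sketch of my dev setup, not a recommended production definition):
[vtpositives(1)]
args = query
definition = search index=_* OR index=* VirusTotal "queried url" $query$ source!=audittrail | table positives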
01-07-2019
06:44 AM
Thanks for the information. I'm going to see if I can slightly modify this to use my index search instead of the data model, because in some scenarios I want to be a lot more granular about hosts in location X vs. location Y, whereas the data model gives me all of them.
12-03-2018
11:35 PM
If you have multivalue fields like src_nt_hosts, then you could either use
| mvexpand src_nt_hosts
or
| eval src_nt_hosts=mvindex(src_nt_hosts,1)
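To see the difference between the two on sample data (a self-contained sketch with made-up host values; note that mvindex is zero-based, so mvindex(src_nt_hosts,1) returns the second value):
| makeresults
| eval src_nt_hosts=split("hostA,hostB,hostC", ",")
| mvexpand src_nt_hosts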