Activity Feed
- Got Karma for Re: How do I determine why a Correlation Search isn't creating a notable event when I expected one?. 04-20-2023 05:58 AM
- Got Karma for Re: tstats WHERE clause filtering on CIDR only partially filters results. 04-25-2022 08:39 AM
- Got Karma for Re: Just updated to ES 6.4 and I need to edit a Threat Match Search. 02-25-2022 03:13 PM
- Karma Re: tstats can't access certain data model fields for scelikok. 08-03-2021 07:51 AM
- Posted Re: tstats can't access certain data model fields on Dashboards & Visualizations. 08-02-2021 07:26 AM
- Posted tstats can't access certain data model fields on Dashboards & Visualizations. 07-30-2021 01:23 PM
- Karma Re: Set a field with a constant value for marcoscala. 07-23-2021 08:52 AM
- Got Karma for Assets with overlapping DHCP Addresses Merging in ES 6. 04-09-2021 12:37 AM
- Posted Re: Just updated to ES 6.4 and I need to edit a Threat Match Search on Splunk Enterprise Security. 02-16-2021 10:14 AM
- Posted Just updated to ES 6.4 and I need to edit a Threat Match Search on Splunk Enterprise Security. 02-03-2021 08:43 AM
- Posted Re: How to parse JSON mvfield into a proper table with a different line for each node named for a value in the node on Getting Data In. 08-26-2020 01:45 PM
- Karma Re: How to parse JSON mvfield into a proper table with a different line for each node named for a value in the node for to4kawa. 08-26-2020 01:43 PM
- Posted How to parse JSON mvfield into a proper table with a different line for each node named for a value in the node on Getting Data In. 08-24-2020 08:34 AM
- Tagged How to parse JSON mvfield into a proper table with a different line for each node named for a value in the node on Getting Data In. 08-24-2020 08:34 AM
- Got Karma for Assets with overlapping DHCP Addresses Merging in ES 6. 06-05-2020 12:51 AM
- Got Karma for Assets with overlapping DHCP Addresses Merging in ES 6. 06-05-2020 12:51 AM
- Got Karma for Re: Assets with overlapping DHCP Addresses Merging in ES 6. 06-05-2020 12:51 AM
- Got Karma for Re: How to access non-threat Intelligence downloads as a file. 06-05-2020 12:51 AM
- Karma Re: How do I do a reverse DNS lookup in Splunk? for peiffer. 06-05-2020 12:50 AM
- Karma Re: Carriage return newline (\r\n) not working as delimiter for makemv for somesoni2. 06-05-2020 12:50 AM
08-02-2021
07:26 AM
Thanks @scelikok - you made a good point there. We did edit the Authentication data model to include the indextime field, and it looks like when we pushed past CIM 4.16, we didn't get the update to Authentication. Just worth noting: reason and action are not the same field. Action is "success" or "failure", while reason is an explanation of that status, e.g., "Bad password" or "User is not in required group". Any suggestions for updating that data model? The only options I can think of are to either remove my custom field and update the CIM over top of it, or to manually add the reason field to the data model as another customization.
07-30-2021
01:23 PM
I need to be able to display the Authentication.reason field in a |tstats report, but when I add the field to the by clause, my search returns no results (as though the field were not present in the data). Yet when I query the data directly, the field IS there. I have tried this with and without data model acceleration, to no avail.

This search returns zero results:

| tstats count from datamodel=Authentication by Authentication.user, Authentication.app, Authentication.reason

This search returns results in the format I need, except that I need to query multiple indexes via the data model:

index=<indexname> tag=authentication
| stats count by user, app, reason
- Tags:
- cim
- data model
- tstats
02-16-2021
10:14 AM
1 Karma
I resolved this issue by editing the Lookup Gen searches instead. I created a new macro and invoked it twice in each Lookup Gen search, once for url and once for http_referrer (just before they get mvjoined together). It now generates the URL threat intel lookups without the protocol headers, and things are matching up the way they need to.

rex field=$url$ "((?<url_protocol>[a-zA-Z]*:\/\/))?(?<$url$>.*)"
| fields - url_protocol

Hope this helps if anyone else has the same issue.
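For anyone wanting to sanity-check the stripping logic outside Splunk, here is a rough Python equivalent of the rex above. The sample URLs are illustrative, and the SPL macro argument is replaced with a plain function parameter:

```python
import re

# Rough Python equivalent of the SPL rex: optionally capture a protocol
# header like "https://", then keep only the remainder of the URL.
PROTOCOL_RE = re.compile(r"(?:(?P<url_protocol>[a-zA-Z]*://))?(?P<url>.*)")

def strip_protocol(url: str) -> str:
    """Return the URL with any leading protocol header removed."""
    match = PROTOCOL_RE.match(url)
    return match.group("url")

print(strip_protocol("https://example.com/path"))  # example.com/path
print(strip_protocol("example.com/path"))          # example.com/path
```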
02-03-2021
08:43 AM
I need to manipulate some fields in the URL threat match search in Splunk ES 6.4, but am at a loss as to how to do so. When viewing the SPL at ES -> Data Enrichment -> Threat Intelligence Management -> Threat Matching, any changes I make to the SPL are not saved, and when I grep for snippets of the threat match search in the splunk/etc directory, I can't find where they are stored.

Our cloud-based web proxy logs do not include the protocol header in the URL field. Since the Web data model requires this, and several of our custom threat intelligence sources include this, we need to bridge the gap in order to perform threat matches from the Web.url and Web.http_referrer fields against threat intelligence. Previously, I had directly edited the Threat - URL Matches - Threat Gen search to include some eval statements just before the threat_intel lookups. These turned the Web.url field into an mvfield including the three protocol headers we see in our threat intelligence, then mvjoined them into one field for whitelisting later on. Here are my additions to the original threat gen search:

| eval url=mvappend("http://".url, "https://".url, "ftp://".url)
| extract domain_from_url
| `threatintel_url_lookup(url)`
| `threatintel_domain_lookup(url_domain)`
| eval url=mvjoin(url, " ")

It wasn't the prettiest solution, but it was the only one we could come up with to get URL matches out of the Threat Intelligence framework. Since the old threat gen searches are deprecated, I replicated this effort with the code shown for the URL threat match search found at ES -> Data Enrichment -> Threat Intelligence Management -> Threat Matching:

| eval Web.url=mvappend("http://".'Web.url', "https://".'Web.url', "ftp://".'Web.url')
| lookup "threatintel_by_url" value as "Web.url" OUTPUT threat_collection as tc0,threat_collection_key as tck0
| lookup "threatintel_by_url_wildcard" value as "Web.url" OUTPUT threat_collection as tc1,threat_collection_key as tck1
| eval Web.url=mvjoin('Web.url', " ")

However, I need to save my new version of the threat match search over the existing one, and as stated above, I'm not sure how to do this. It seems like the SPL shown at ES -> Data Enrichment -> Threat Intelligence Management -> Threat Matching may be generated from the various user-configurable GUI options. If that is the case, how can I ensure that my web proxy logs can be processed through the threat intelligence framework?
- Labels:
- configuration
- troubleshooting
08-26-2020
01:45 PM
THANK YOU. I've been wrestling with spath for a long time and this example made a lot of things click for me. Exactly what I was looking for (and what I have been looking for when dealing with JSON for ages)
08-24-2020
08:34 AM
I have run into this barrier a lot while processing Azure logs: I want to do something intuitive like
|stats count by appliedConditionalAccessPolicies{}.displayName, appliedConditionalAccessPolicies{}.result
but since there are multiple instances of each displayName-d policy per event and all of the sub-values that have the same name are MV-fielded together, my results are much less meaningful than I had intended.
I'm sure the answer to this involves |spath, but I'm struggling to wrap the examples I see here and here around my data.
Ideal result: make this

|stats count by appliedConditionalAccessPolicies{}.displayName AS policy_name, appliedConditionalAccessPolicies{}.result AS result

produce something like this:

policy_name | result | count
---|---|---
application_policy | failure | 12398
application_policy | success | 9889898
phone_policy | success | 1238988
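As an aside, the reason the naive |stats is misleading is that the per-policy pairing lives inside each array element, and MV-fielding collapses it. A minimal Python sketch, using hypothetical sample events, of the grouping the expanded search needs to produce:

```python
from collections import Counter

# Hypothetical Azure sign-in events shaped like the logs described above:
# each event carries an array of applied Conditional Access policies.
events = [
    {"appliedConditionalAccessPolicies": [
        {"displayName": "application_policy", "result": "failure"},
        {"displayName": "phone_policy", "result": "success"},
    ]},
    {"appliedConditionalAccessPolicies": [
        {"displayName": "application_policy", "result": "success"},
    ]},
]

# Counting per (policy, result) pair requires expanding each array element
# into its own row first; zipping two separate MV fields loses that pairing.
counts = Counter(
    (policy["displayName"], policy["result"])
    for event in events
    for policy in event["appliedConditionalAccessPolicies"]
)

for (policy_name, result), count in sorted(counts.items()):
    print(policy_name, result, count)
```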
- Labels:
- JSON
04-14-2020
09:05 AM
1 Karma
I think that in our case, the problem is that the IP field is populated from AD, not an actual DHCP server. It's just the AD server's internal DNS record for the AD hostname, and since these are DHCP IPs managed by another service, there are often duplicates. Since ip is a key field, the new behavior is for ES to indiscriminately combine records with the same key fields; my monster correlated host records came from this process repeating over and over for several months.
I got a tip from someone at Splunk out of band that this might get cleaned up in 6.1.1, so I have largely set the issue aside until we can upgrade.
If it turns out this is just the new behavior for asset correlation, I'm just going to add some logic to the lookup generator search which uses ldapsearch to drop the ip field from any asset in a DHCP subnet.
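The conditional removal described above can be sketched outside SPL. Here is a minimal Python illustration; the DHCP subnets and asset records are hypothetical, not our real ranges:

```python
import ipaddress

# Hypothetical DHCP ranges; the real list would come from your IPAM/AD config.
DHCP_SUBNETS = [ipaddress.ip_network(c) for c in ("10.50.0.0/16", "192.168.0.0/16")]

def scrub_dhcp_ip(asset: dict) -> dict:
    """Drop the ip field from an asset record when it falls in a DHCP subnet,
    mirroring the conditional removal proposed for the lookup gen search."""
    ip = asset.get("ip")
    if ip and any(ipaddress.ip_address(ip) in net for net in DHCP_SUBNETS):
        asset = {k: v for k, v in asset.items() if k != "ip"}
    return asset

print(scrub_dhcp_ip({"nt_host": "WS01", "ip": "10.50.1.23"}))   # ip removed
print(scrub_dhcp_ip({"nt_host": "SRV01", "ip": "172.20.0.5"}))  # ip kept
```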
03-31-2020
06:45 AM
3 Karma
We use SA-ldapsearch to pull Active Directory data into the ES Assets & Identity framework. We do not currently ingest DHCP logs, but the IP address last seen for an AD computer is pulled in as part of the ldapsearch lookup gen search (below). Having recently updated to ES 6 and Splunk 8, I'm noticing that workstations are being combined in the Asset KV stores (assets_by_str) if they share an IP address. Since IP addresses change at different times and many of our users work from home with or without VPN, this is a common occurrence. This leads to ridiculous results in investigation in which the "source_hostname" ends up being mapped from the source (DHCP) IP address in the search result to an MV field of 50-60 hostnames all of which at some point or another in history had that IP address.
I know that I can turn Asset correlation OFF in the ES configuration for Data Enrichment, but I don't want that, since hostnames are accurately resolved to user identities in many cases; also, old data is better than no data. I have considered conditionally eliminating IP addresses from our DHCP ranges by simply conditionally removing the IP record from the lookup gen search (below), but what I'm really looking for is a best practice. Is Splunk ES 6 designed to handle DHCP in some other way I'm not seeing? If not, this change seems asinine. No one could ever want the asset data for DHCP endpoints to be handled in this way.
| ldapsearch domain=default search="(&(objectClass=computer))"
| eval city=""
| eval country="US"
| eval priority="medium"
| eval category="normal"
| eval dns=dNSHostName
| eval owner=description
| rex field=sAMAccountName mode=sed "s/\$//g"
| eval nt_host=sAMAccountName
| makemv delim="," dn
| rex field=dn "(OU|CN)\=(?<bunit>.+)"
| eval requires_av="true"
| eval should_update="true"
| lookup dnslookup clienthost as dns OUTPUT clientip as ip
| join managedBy
[| ldapsearch search="(&(objectClass=user))"
| rename distinguishedName AS managedBy, sAMAccountName AS managed_by_user
| table managedBy managed_by_user]
| table ip,mac,nt_host,dns,owner,managed_by_user,priority,lat,long,city,country,bunit,category,pci_domain,is_expected,should_timesync,should_update,requires_av
| outputlookup ad_assets.csv
03-05-2020
09:31 AM
Did you ever have any success with this? We also use Lansweeper and are hoping to pull ticket and asset/inventory information into Splunk.
03-02-2020
12:43 PM
1 Karma
Thanks-- that's exactly what I was looking for!
02-28-2020
01:01 PM
I have configured ES to download the list of free webmail-hosting domains below as an intelligence download (Data inputs -> Intelligence Downloads). I don't want to trigger Threat Activity results based on these domains since they include common services like outlook.com, gmail.com, yahoo, etc., so I unchecked the Is Threat Intelligence checkbox when creating the file. It has successfully downloaded the file to splunk/var/lib/splunk/modinputs/threatlist/filename.txt , but I am at a loss for how to get it into a CSV for use in search. I tried to create a lookup definition in the GUI, but I presume that dialog is only able to see CSVs which are in the /lookups directories for various apps.
Does anyone have any suggestions for using my new intelligence file as a lookup? Thanks!
hxxps://gist.githubusercontent.com/tbrianjones/5992856/raw/93213efb652749e226e69884d6c048e595c1280a/free_email_provider_domains.txt
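As a stopgap, a small script can turn the one-domain-per-line download into a lookup-style CSV. This is only a sketch with hypothetical filenames and sample content; the real source file lives under splunk/var/lib/splunk/modinputs/threatlist/:

```python
import csv
from pathlib import Path

# Hypothetical filenames standing in for the downloaded threatlist file
# and the CSV a lookup definition could reference.
src = Path("free_email_provider_domains.txt")
dst = Path("free_email_domains.csv")

# Sample content standing in for the downloaded list.
src.write_text("gmail.com\noutlook.com\nyahoo.com\n")

with src.open() as infile, dst.open("w", newline="") as outfile:
    writer = csv.writer(outfile)
    writer.writerow(["domain"])  # header row the lookup definition will use
    for line in infile:
        domain = line.strip()
        if domain and not domain.startswith("#"):
            writer.writerow([domain])

print(dst.read_text())
```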
02-05-2020
11:53 AM
Just heard back from my Splunk support rep:
this is actually a known issue and there is currently an open bug about this.
SPL-179357 is the bug number, which reports this behavior, the workaround is the one you are already tested where NOT cidrmatch("127.0.0.0/8", ip)
either way, the problem has been isolated and fixed in version 8.0.2
I guess we're not crazy! Thanks for the help.
02-04-2020
11:41 AM
The same is true of the |tstats ... where NOT [|inputlookup] -- it works for the positive case but not for the negation.
02-04-2020
11:27 AM
1 Karma
Yep-- used the lookup name, not the lookup file (they have different names, and I double-checked the permissions!)
This does not work:
| search NOT dest_ip IN (10.0.0.0/8,172.16.0.0/12, 192.168.0.0/16)
This DOES work:
| where NOT cidrmatch("10.0.0.0/8",dest_ip)
I'm noticing that none of the |search based negation filters work. Note-- the affirmative versions of these searches all work. I can say things like dest_ip=10.0.0.0/8 or dest IN (10.0.0.0/8,172.16.0.0/12, 192.168.0.0/16) and it will work fine, but the negation is what isn't working.
This is in Splunk Enterprise 8.0.1.
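For reference, this is the behavior the negation should have. A minimal Python sketch of cidrmatch-style filtering against the RFC 1918 ranges, with illustrative sample IPs:

```python
import ipaddress

RFC1918 = [ipaddress.ip_network(c)
           for c in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_internal(dest_ip: str) -> bool:
    """Equivalent of cidrmatch() against the three RFC 1918 ranges."""
    addr = ipaddress.ip_address(dest_ip)
    return any(addr in net for net in RFC1918)

# The negation filter should keep only non-internal destinations.
dests = ["10.1.2.3", "8.8.8.8", "192.168.1.1", "1.1.1.1"]
external = [d for d in dests if not is_internal(d)]
print(external)  # ['8.8.8.8', '1.1.1.1']
```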
02-04-2020
06:21 AM
OK, additional mystery-- the second search you proposed doesn't filter these either. And neither does
| search dest_ip!=10.0.0.0/8
or
| search NOT dest_ip=10.0.0.0/8
02-04-2020
06:11 AM
I really like the first approach you mention, but I can't seem to get it to work. I created the lookup and defined a lookup definition for CIDR(ip), but the subsearch with |inputlookup doesn't seem to be filtering on CIDR. Should it? It seems like the lookup definition would apply within the subsearch, but wouldn't help to match IPs against CIDRs when the subsearch is compared to the data. This is mostly guesswork on my part though-- I don't know the intricacies of subsearch filters or lookups, so there may be a different problem.
02-03-2020
11:33 AM
I have a dashboard which displays some simple "top 15" visualizations based on outbound network traffic. The base search just pulls some basic stats from All_Traffic, filtering in the tstats ... where clause to include only outbound traffic. I define "outbound" to be any traffic for which the source is an internal IP and the destination is NOT an internal IP.
This worked up until we upgraded from Splunk 7.3.1 to 8.0.1, but now the clauses filtering out All_Traffic.dest_ip!=10.0.0.0/8, etc., are completely ignored (running the same search with and without the condition returns the same results, without the desired filtering).
Here's the original base search:
| tstats count(All_Traffic.dest_ip) AS ip_count count(All_Traffic.dest_port) AS port_count from datamodel=Network_Traffic where (All_Traffic.src_ip=10.0.0.0/8 OR All_Traffic.src_ip=192.168.0.0/16 OR All_Traffic.src_ip=172.16.0.0/12) AND NOT (All_Traffic.dest_ip=10.0.0.0/8 OR All_Traffic.dest_ip=192.168.0.0/16 OR All_Traffic.dest_ip=172.16.0.0/12) by All_Traffic.dest_ip, All_Traffic.dest_port
| rename All_Traffic.* AS *
A simpler version with only one exclusion in the tstats ... where clause which also does not work:
| tstats count(All_Traffic.dest_ip) AS ip_count count(All_Traffic.dest_port) AS port_count from datamodel=Network_Traffic where All_Traffic.dest_ip!=10.0.0.0/8 by All_Traffic.dest_ip, All_Traffic.dest_port
| rename All_Traffic.* AS *
This seems very similar (but not identical) to the problem described in the release notes for 8.0.1 as fixed:
SPL-179594, SPL-177665 - tstats where clause does not filter as expected when structured like "WHERE NOT (field1=foo AND field2=bar)"
Also seems related to the question here: hxxps://answers.splunk.com/answers/760542/why-only-one-condition-works-for-where-clause-in-a.html
Similar to the asker above, I am hoping to do the filtering in the WHERE clause of the tstats for performance. I run this search over the past 24h and it takes a while to run. I'd rather not split the tstats by src_ip and have to reaggregate with another stats, and would prefer to do the filtering BEFORE passing the stats to |search.
I can work around it if I have to (the search below DOES work), but I'd rather go with something a bit more performant.
| tstats count(All_Traffic.dest_ip) AS ip_count count(All_Traffic.dest_port) AS port_count from datamodel=Network_Traffic by All_Traffic.dest_ip, All_Traffic.dest_port, All_Traffic.src_ip
| rename All_Traffic.* AS *
| where (cidrmatch("10.0.0.0/8",src_ip) OR cidrmatch("172.16.0.0/12",src_ip) OR cidrmatch("192.168.0.0/16",src_ip) OR cidrmatch("169.254.0.0/16",src_ip)) AND NOT (cidrmatch("10.0.0.0/8",dest_ip) OR cidrmatch("172.16.0.0/12",dest_ip) OR cidrmatch("192.168.0.0/16",dest_ip) OR cidrmatch("169.254.0.0/16",dest_ip))
01-22-2020
11:37 AM
Ended up altering the threat_gen search for URL matches to make the url field an mvfield with the three most likely url protocol prefixes. The only line I added to the stock code was | eval url=mvappend("http://".url, "https://".url, "ftp://".url) , shown inline below. This allows the lookups and the domain extraction to function properly, and the analyst is able to review the original log to see the actual protocol used. If you have Zscaler and need a cleaner solution, you can probably add "Web.transport" to the |tstats ... by clause and build the correct value with an eval case statement, but I went with the mvfield option for its simplicity.
| `tstats` values(sourcetype) as sourcetype,values(Web.src),values(Web.dest) from datamodel=Web.Web by Web.http_referrer
| eval url='Web.http_referrer'
| eval threat_match_field="http_referrer"
| `tstats` append=true values(sourcetype) as sourcetype,values(Web.src),values(Web.dest) from datamodel=Web.Web by Web.url
| eval url=if(isnull(url),'Web.url',url)
| eval threat_match_field=if(isnull(threat_match_field),"url",threat_match_field)
| stats values(sourcetype) as sourcetype,values(Web.src) as src,values(Web.dest) as dest by url,threat_match_field
| eval url=mvappend("http://".url, "https://".url, "ftp://".url)
| extract domain_from_url
| `threatintel_url_lookup(url)`
| `threatintel_domain_lookup(url_domain)`
| search threat_collection_key=*
| `mvtruncate(src)`
| `mvtruncate(dest)`
| `zipexpand_threat_matches`
01-17-2020
10:54 AM
1 Karma
I was able to solve this by manually adding _indextime to the data model as an eval field. For anyone else with this problem, my solution is below...
Steps:
1. Deaccelerate the DM
2. Edit Datasets
3. Add Field
4. Eval Expression
5. Make the Eval Expression "_indextime" and both Field Name and Display Name "indextime"; set Type="Number", leave Flags="Optional", and save
6. Reaccelerate the DM
The two searches below should return the same results (they might be off by just a few events since relative times are used). Note that since the field is part of the data model, it must be referenced as Authentication.indextime instead of just _indextime as before. Additionally, you cannot use conditions like "_index_earliest" and "_index_latest" in the same way; you must manually filter on the value later in the search.
With |tstats
| tstats count from datamodel=Authentication by Authentication.user, Authentication.action, Authentication.indextime, _time, sourcetype span=1m
| rename Authentication.* as *
| where indextime>relative_time(now(), "-65m")
Without |tstats
| from datamodel:Authentication | bucket _time span=1m
| stats count by user, action, _time, indextime, sourcetype
| where indextime>relative_time(now(), "-65m")
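The final where clause is just an epoch comparison. A minimal Python sketch of the relative_time(now(), "-65m") cutoff, with hypothetical event ages:

```python
from datetime import datetime, timedelta

# relative_time(now(), "-65m") is simply "now minus 65 minutes" as an
# epoch value; events pass only if they were indexed after that cutoff.
now = datetime.now()
cutoff = (now - timedelta(minutes=65)).timestamp()

# Hypothetical index times: events 5, 30, 64, 66, and 120 minutes old.
indextimes = [now.timestamp() - 60 * m for m in (5, 30, 64, 66, 120)]
recent = [t for t in indextimes if t > cutoff]
print(len(recent))  # the 5-, 30-, and 64-minute-old events pass
```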
01-15-2020
01:03 PM
Thanks for the reply. I guess I now AM blaming the DMA, but not because it is not shared-- only because it seems to be excluding the _indextime field.
01-15-2020
01:02 PM
OK, I think I have isolated the problem to the Data Model Acceleration. When removing data model acceleration on the original search head, the search works properly (returning all expected results.) When reaccelerating the data model, it goes back to returning minimal or zero results.
Is _indextime normally available in the tsidx files? I have read other answers on this site that seem to imply it is (e.g., hxxps://answers.splunk.com/answers/540344/how-to-compute-indextime-time-difference-average-w.html)
01-13-2020
05:35 AM
I don't think the DMA is to blame. It works on the search head with the same Data Model but with no acceleration, but the search head which DOES have a DMA is not functioning as expected.
01-10-2020
11:00 AM
I believe this problem may be related to Data Model Acceleration? In our Splunk environment, we have two (non-clustered) search heads directed at the same indexer. One has a number of CIM data models accelerated (including Authentication; for use in ES), and the other does not. I have determined that the search above (the first one) DOES work properly on the other search head. Unfortunately, I need it to work in the case of the accelerated data model.
I have read that _indextime is an internal meta field. Is it implicitly "accessible" to an accelerated data model tstats search, or do I need to explicitly add it to the data model?
12-09-2019
01:30 PM
We use the zScaler proxy product and have it configured with NSS to collect logs in Splunk Enterprise. We also download the PhishTank URL watchlist into the Threat_Intelligence framework in Enterprise Security. We have a problem because the URL field in our zScaler logs is stored differently than the IoCs in the PhishTank list.
A zScaler log for an HTTPS connection might look like either of these:

url=google.com/images
protocol=HTTPS

url=google.com/images
protocol=SSL
while PhishTank would provide an IoC like this:
https://google.com/images
Glancing at the Web data model, it seems like the expectation is that the URL field includes the protocol, so it seems like the logs are what need to be fixed, not the threat list (see hxxps://docs.splunk[.]com/Documentation/CIM/4.14.0/User/Web)
At first glance, it seems like the easy fix would just be to alias the "url" field to have the "protocol" field in front of it ( url_new=protocol."://".url_old or similar), but in cases where the protocol field is SSL, it needs to be slightly more complicated. I could write this as an |eval statement, but I'm not exactly sure where to put it for it to take effect. Would this be a field extraction?
In any case, I need to conditionally add "https://" to the url field for SSL or HTTPS, while adding "http://" in cases where the protocol field is HTTP.
Many thanks!
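The conditional logic itself is simple. Here is a minimal Python sketch; the SSL-to-https mapping reflects the assumption described above, and the function name is made up:

```python
def add_protocol(url: str, protocol: str) -> str:
    """Prefix the url with a scheme derived from the protocol field:
    SSL and HTTPS both map to https://, HTTP maps to http://.
    Unknown protocols leave the url unchanged."""
    scheme = {"SSL": "https", "HTTPS": "https", "HTTP": "http"}.get(protocol.upper())
    return f"{scheme}://{url}" if scheme else url

print(add_protocol("google.com/images", "SSL"))   # https://google.com/images
print(add_protocol("google.com/images", "HTTP"))  # http://google.com/images
```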
12-03-2019
11:16 AM
You're the man! It hadn't even occurred to me to rex the XML out and treat it separately-- thanks!