Log4J Query:
index=*
| regex _raw="(\$|%24)(\{|%7B)([^jJ]*[jJ])([^nN]*[nN])([^dD]*[dD])([^iI]*[iI])(:|%3A|\$|%24|}|%7D)"
| eval action=coalesce(action_taken, elb_status_code, status)
| iplocation src_ip
| where NOT (cidrmatch("192.168.0.0/16",src_ip) OR cidrmatch("10.0.0.0/8",src_ip) OR cidrmatch("172.16.0.0/12",src_ip)) OR Country="United States"
| fillnull value="unknown" src_ip, dest_ip, action, url, Country
| stats count by src_ip, Country, dest_ip, url, action, sourcetype
| sort - count
Note: iplocation has to run before the where clause, otherwise Country doesn't exist yet and that condition silently never matches. The notNULL eval was a leftover no-op, so I dropped it.
This searches everywhere for any sign of the Log4j exploit string being used. I've done field extraction on the sourcetypes returned by my previous query:
index=*
| regex _raw="(\$|%24)(\{|%7B)([^jJ]*[jJ])([^nN]*[nN])([^dD]*[dD])([^iI]*[iI])(:|%3A|\$|%24|}|%7D)"
| stats count as "exploit attempts" by sourcetype
| sort - "exploit attempts"
I extracted fields so that I can get a table with src_ip, Country, dest_ip, url, action, sourcetype, and count. I then want to use this query as the basis for follow-up searches to determine whether the exploit was successful and whether any follow-on communication occurs.
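For the follow-up piece, one option is to feed the attacker IPs from the exploit search into a subsearch over your traffic data. A rough sketch — the index names (firewall, proxy) are placeholders for whatever your environment actually uses, and it assumes src_ip is extracted consistently across those sourcetypes:

index=firewall OR index=proxy
    [ search index=*
      | regex _raw="(\$|%24)(\{|%7B)([^jJ]*[jJ])([^nN]*[nN])([^dD]*[dD])([^iI]*[iI])(:|%3A|\$|%24|}|%7D)"
      | dedup src_ip
      | fields src_ip ]
| stats count, earliest(_time) as first_seen, latest(_time) as last_seen by src_ip, dest_ip, action

The subsearch returns its src_ip values as an OR'd filter on the outer search, so the result is all traffic involving IPs that were seen sending exploit strings.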
The query works and I get results like this (fake results):
src_ip | Country | dest_ip | url | action | sourcetype | count |
248.216.243.59 | Unknown | 192.168.1.148 | 192.168.1.148/ | blocked | firewall | 3 |
207.191.80.208 | US | 192.168.1.216 | 192.168.1.216/ | allowed | firewall | 2 |
The search starts out scanning millions of events every few seconds, then slows to thousands every few seconds.
If there is any more information I can add, then feel free to ask and I will edit.
I suspect part of the problem is index=*. Searching every non-internal index you have is going to be slow, so much so that the construct is not allowed in many shops. The search is probably running fast at first until memory pressure slows things down.
Try searching only the indexes known to contain log4j events. You probably can skip the Cisco logs, for example.
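For example, if the exploit attempts are only landing in your web and firewall data, the first search could be scoped like this (the index names here are placeholders for whatever your environment uses):

index=web OR index=firewall
| regex _raw="(\$|%24)(\{|%7B)([^jJ]*[jJ])([^nN]*[nN])([^dD]*[dD])([^iI]*[iI])(:|%3A|\$|%24|}|%7D)"
| stats count by src_ip, sourcetype

Restricting the index list lets Splunk skip entire buckets instead of decompressing and regex-scanning every event you have.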
I should point out that only failed Log4Shell exploits will show up in these logs. Successful exploits are silent.
This makes sense. I'm relatively new to Splunk, but the memory explanation fits what I'm seeing. I'm curious why you think only failed exploits will show up, though. A lot of the logs I'm looking at are Blocked and Allowed attempts — most of them appear in web requests, which are all logged.
What I've read about Log4Shell says successful JNDI lookups are silent, but perhaps your environment is different.
Under some scenarios this could be true — for example, if you're only looking at network or firewall logs rather than the logs the application itself writes.
If you can't find the JNDI string in the logs that Log4j has written to, it means the target Log4j is not configured to log that information and is not vulnerable.
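A quick way to check that last point is to search the application logs themselves for the JNDI string — if Log4j wrote it out, the payload reached a logger. A sketch, assuming a hypothetical app index holding the application logs:

index=app ("${jndi:" OR "%24%7Bjndi")
| stats count by host, sourcetype

Any hosts returned here logged the raw payload, which is exactly the behavior the exploit depends on, so those are the instances to prioritize for patching.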