
Splunk and Log File Injection?

Lowell
Super Champion

Are there any resources or white papers out there that talk about Splunk and LFI (Log File Injection)? Splunk makes it amazingly easy to push data in with minimal effort, but what if you want more control and assurance that input data has not been tampered with or modified? How do you prevent (or reduce the likelihood of) log file injection?

I've found a few non-Splunk publications on the topic and was curious how this looks in Splunk. Several of the kinds of attacks I've read about don't really work with Splunk, but some would, and quite frankly Splunk opens the door to a few new kinds of attacks not possible in the typical log-file approach. (Which I'm sure is true of other central logging tools as well.)

This question originally came up in a discussion about which Universal Forwarders are "allowed" to send data to Splunk. It looks like the options are limited to acceptFrom (a network source restriction) and requireClientCert (a certificate-based restriction). But even if both are enabled, that doesn't stop log injection from occurring if the application or OS producing the logs has been compromised. And in many cases a full compromise isn't even the issue; a poorly written app that doesn't do proper escaping is enough to create undesirable behavior. Since this topic is growing, and I'm the kind of person who desires answers to all of life's questions, I'm requesting help from those who are wiser than I!
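
For reference, here is roughly what those two restrictions look like on the receiving side. This is just a minimal sketch of inputs.conf on an indexer; the port, subnet, and certificate path are placeholder assumptions, not settings from this discussion:

    # inputs.conf on the receiving indexer (illustrative values only)
    [splunktcp-ssl:9997]
    # only accept forwarder connections from this subnet
    acceptFrom = 10.1.2.0/24

    [SSL]
    serverCert = $SPLUNK_HOME/etc/auth/server.pem
    requireClientCert = true

As noted above, neither setting helps once the box running the forwarder (or the app writing the log) is itself compromised.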

BTW, I'm mainly looking at pre-indexing considerations. I know that Splunk has event signing and audit trails to protect the data once it's been indexed, but how do you protect the data before that point? And from a search-time perspective, how do you deal with the reality that this probably will occur at some point? Because it seems like if you aren't paying attention, this stuff would be really easy to miss.

Possible Log Injection considerations

Here are a few specific security considerations / scenarios I've come up with. There's no guarantee that all of these approaches would even work, but the main point is that there is a wide range of possible attacks that are Splunk-specific, and at this point I haven't seen much discussion about them.

  • Dynamic Splunk header - Allows metadata properties to be set or overwritten. Thankfully this now must be enabled explicitly. It basically allows a stream to be redirected so that it appears to come from some other origin. (Often this is quite helpful, but it could be used to "hide" or mislead.) A malicious user could append "***SPLUNK*** index=_internal", causing an event to be accessible only to admin users and not to the network security team. (Sending events to a non-existent index may draw too much attention due to the resulting warning banners.)
  • Time order vs. sequence - Splunk stores data in reverse time order and does not maintain original event ordering. This eliminates one possible means of detecting log injection. (Maybe _indextime analysis could find some anomalies, but it's not a direct comparison.) See the Log Injection Attack and Defense link below. Remember that indexed order doesn't always map directly to original log file sequence in a distributed Splunk environment.
  • Newline/delimiter injection - Like in many other tools, if a logging source allows arbitrary data to be logged, it's possible to create new events in Splunk. Additionally, depending on the props settings, it's possible that simply injecting an extra timestamp could cause event breaking to occur, even if the application prevents newlines from being injected.
  • Splunk Forwarders - All forwarders allow metadata to be overwritten (host/source/sourcetype/index). Therefore a forwarder can lie about who it is. Now the receiving Splunk instance will capture connection information (like originating IP) and metrics about a rogue forwarder connection, but there's no easy way to determine which "host"s came from which forwarder. (A forwarder can legitimately send multiple host names, for example if it's running the DB Connect app or collecting syslog records; and yes, syslog itself has a similar problem.) Setting requireClientCert=true would prevent unauthorized connections, but it doesn't guarantee the host won't lie about itself; for example, if a system containing a UF was hijacked. And unless Splunk adds the forwarder's signature hash to each event, it's difficult to track down where a given event originated.
  • Tricking search-time detection - If a malicious user has inside knowledge of the alerting and triggering criteria (or knows which publicly available apps are installed), and they also have access to add arbitrary (or somewhat arbitrary) content to a message, there are many ways detection could be circumvented, especially if knowledge objects are poorly implemented (see the search sketch after this list):
  1. Exploit exclusion logic - If a (poorly written) search is looking for login failures but also includes an expression like "NOT debug" (to exclude a useless message some developer refuses to disable), then bypassing detection is as simple as injecting the word "debug" anywhere in the log message. (Using the search "NOT log_level=DEBUG" is a step better, but may still be circumvented, depending on search-time extraction settings.)
  2. auto-kv - Adding extra "key=value" pairs to an event could circumvent detection, possibly by overriding an expected field. (I'm guessing there's some trickery that could be done with the "tag" field too, but I'm not certain.)
  3. field aliases - If a search relies on a field that's really an alias, it's possible to use a "key=value" pair to avoid detection. For example, if an anomaly detection search performs a stats operation that groups by the src field (which is really just an alias for src_ip), and "src=bogus" could somehow be injected into the event, then the grouping logic could be controlled by an externally dictated value instead of the system-captured IP address.
  4. punct - Any search relying on a specific punctuation pattern could be circumvented by injecting commonly allowed punctuation characters, for example a period, dash, comma, or space. The same is true for poorly written regex field extractions.
  • Triggered Scripts - Again, someone with inside knowledge could, for example, take a poorly written shell script that was intended to block firewall access for a given IP and instead use it to launch their own custom commands simply by injecting a log message. This depends on several layers of flaws, but it's conceivable. (An app that allows arbitrary content to be logged, loose field extraction settings, and a poorly implemented script; that's not too far-fetched.)
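
To make the exclusion-logic and field-alias items above concrete, here is a rough before/after sketch. The sourcetype, field names, and thresholds are hypothetical placeholders rather than anything from a real deployment; the point is just to match on extracted fields and group by indexer-assigned metadata rather than raw keywords or event-supplied values.

Fragile version (keyword exclusion; defeated by injecting the word "debug" anywhere in the message, and grouped by an event-controlled field):

    sourcetype=app_auth "login failure" NOT debug
    | stats count by src
    | where count > 10

Somewhat hardened version (field-based exclusion, grouped by the indexer-assigned host; a step better, though still dependent on sane extraction settings):

    sourcetype=app_auth action=failure NOT log_level=DEBUG
    | stats count by host, user
    | where count > 10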

Resources

gkanapathy
Splunk Employee

Your number one defense against this is to get the files off the source server and into Splunk as soon as possible. After that, you should have reasonable assurance that the log is as written. Any other tampering would result in _indextime anomalies. These can actually be detected, e.g., by alerting on any event where _indextime - _time differs significantly from the average _indextime - _time. Most of the other "attacks" are just ways to try to hide log messages (e.g., trying to sneak "DEBUG" into the record of your unauthorized login), and there's really not much you can do about that without being more rigorous about your known formats. Fundamentally that is equivalent to a vulnerability in your log formats and queries that you'd have to design around, but fortunately that is easy to do if you're actually in a position to do anything about it. The reason you have all those "tricks" and tools in Splunk is basically that they exist to deal with crap log formats, which, yes, are going to be vulnerable to this kind of thing.
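
As a rough illustration of the _indextime idea (the index name, per-host grouping, and 3-sigma threshold here are arbitrary assumptions, not part of the original answer), something along these lines could surface events whose indexing lag is unusual for their host:

    index=main
    | eval lag = _indextime - _time
    | eventstats avg(lag) as avg_lag, stdev(lag) as sd_lag by host
    | where abs(lag - avg_lag) > 3 * sd_lag
    | table _time, host, source, lag, avg_lag, _raw

A scheduled alert built on this won't prove tampering, but a sudden cluster of out-of-band lag values from one host is at least worth a look.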

Lowell
Super Champion

I've attempted to re-ask a more specific question on the topic of compromised forwarders. Specifically, how would you track down the problem server: http://answers.splunk.com/answers/116535/how-to-track-down-a-dishonest-uf
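
In the meantime, one place to start correlating forwarder connections with what they claim to be is the indexer's own connection metrics. This is only a sketch, and the exact fields vary by version, so treat the field names as assumptions to verify against your own _internal data:

    index=_internal source=*metrics.log* group=tcpin_connections
    | stats latest(version) as version, latest(os) as os, sum(kb) as kb by hostname, sourceIp, guid
    | sort - kb

That at least ties the self-reported hostname to an originating IP and forwarder GUID, which helps narrow down a box that is lying about itself.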


Lowell
Super Champion

I agree that much of this is a log-creation problem and that getting the logs to a central (and secure) location ASAP resolves many issues. (It's not possible to hide your tracks by modifying logs if Splunk already indexed a copy.) That's huge. But anytime you depend on rule-based detection, someone clever can find a way around it. Of course, Splunk shines because you can change the rules (search logic, knowledge objects, ...) and re-check for past anomalies. With that said, I was hoping someone had written or compiled a list of tips, techniques, methodologies, and best practices on the topic.
