I really want to work with Carbon Black Response data in Splunk. While the app lets me run direct queries for things I already know about, Splunk could let me join on processes and network connections in ways that the Response console can't.
The only problem that I have is that Response seems to generate too much data. Following instructions from RedCanary (https://www.redcanary.com/blog/carbon-black-response-splunk-integration/), we tried grabbing process starts and network connections.
Six minutes of data came to 1GB -- that's on track for 240GB/day (maybe a little less, since things should be slower at night).
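The back-of-the-envelope math, as a quick sanity check:

```python
# Observed ingest: 1 GB every 6 minutes, extrapolated to a full day.
minutes_per_day = 24 * 60              # 1440 minutes
gb_per_day = 1 * minutes_per_day / 6   # one 1 GB sample per 6-minute window
print(gb_per_day)                      # 240.0
```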
This is more than I can handle (by a lot). Does anyone have experience restricting the data coming from Carbon Black Response? Any tips or tricks?
My understanding is that the Carbon Black event forwarders only filter on event types, which I have already restricted to process starts and network connections.
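For context, the restriction in my cb-event-forwarder.conf looks roughly like this (a sketch; exact option names may vary between forwarder versions, so check your version's documentation):

```
# forward only raw process-start and netconn sensor events
events_raw_sensor=ingress.event.procstart,ingress.event.netconn
# disable the other event categories
events_watchlist=0
events_feed=0
events_alert=0
events_binary_observed=0
events_binary_upload=0
```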
I suppose it depends on what you want to achieve. For us, we just ingest our watchlist and feed data into Splunk; that way we get our core alerts and can alter our watchlists if we need anything more specific. This obviously means we can't use the full functionality of the app, but really the alerts are all we want anyway. The cost of indexing all of that Cb data is too high, so it's better to be specific. I also feel that for drilling down, the Cb Response web interface is far more effective than Splunk, so we just pivot into that when needed.
How many endpoints do you have on your network? I would start with just feed/watchlist/alert hits in your situation, then add in process starts after you have a few days under your belt to determine volume.
That said, 1GB in 6 minutes still sounds very high. Can you share your cb-event-forwarder.conf file (removing any credentials before posting)?
Your configuration file is in line with what I would expect -- it does look like it will eliminate all event types except process start, netconn, and process block. Just to make sure: do you only see those event types in the Splunk console as well?
In that case, you may just have a very noisy environment (either lots of network connections, or Mac/Linux endpoints, which create a lot more process events than Windows workstations). There's nothing else built into the event forwarder to perform additional filtering, so you would have to trim more event types (you can use Splunk to determine the relative ranking of which event types are most prevalent in your environment).
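To get that ranking, something along these lines should work in SPL (the index, sourcetype, and `type` field name are assumptions based on the Red Canary setup; adjust them for your deployment):

```
index=carbonblack sourcetype=bit9:carbonblack:json
| eval bytes=len(_raw)
| stats count, sum(bytes) AS total_bytes BY type
| sort - total_bytes
```

Sorting on total bytes rather than event count matters here, since a less frequent event type with large payloads can still dominate license usage.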
Yup, event_type is just netconn and proc (for the short amount of time that we tested it).
Netconn is about 10x proc event counts.
Is there an approach to filtering out some processes and their associated network connections?
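One option, as a sketch: this is standard Splunk index-time filtering rather than anything Cb-specific, and the sourcetype and process names below are hypothetical. Events routed to the nullQueue are dropped before indexing and never count against your license:

```
# props.conf
[bit9:carbonblack:json]
TRANSFORMS-drop_noisy = drop_noisy_procs

# transforms.conf
[drop_noisy_procs]
# hypothetical example -- match raw events for processes you don't care about
REGEX = (chrome\.exe|svchost\.exe)
DEST_KEY = queue
FORMAT = nullQueue
```

The caveat is that this matches a regex against the raw event, so reliably dropping a process together with its child netconn events may require matching on the process GUID fields in the forwarder's JSON rather than on the process name.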