I have a dashboard that analysts use to quickly triage events. Within this dashboard I have several panels that separate events based on correlation across different sensors, callbacks, and miscellaneous events such as binaries, certain types of web infections, etc. I need to add hash reputation to events triggered from binaries.
I have a custom Python script that accepts a hash value, submits it to a reputation cloud, and parses the response to determine reputation. We receive hundreds of hashes per hour from certain sensors, all of which get passed to this reputation cloud, which could cause a lookup table to become large. I've also had issues in the past with KV Stores seeming to roll or lose data, though I won't rule out a configuration issue, since I was using KV Stores to implement event acknowledgments.
My question is: is there a way to use props / transforms / custom scripts to flag events that contain a hash, pass that hash to my custom script, and add a new reputation field to the event with the response value prior to indexing? I know this could potentially slow indexing and searching. Otherwise, is there a reliable way to use a lookup table or KV Store to do this, understanding that my panels contain a mixture of event types and that the table could grow to millions of entries? Beyond displaying reputation data within the dashboard, we also need the ability to search only for events with a certain reputation and to create metrics on this reputation data for leadership, so any kind of rolling or aging off with lookup tables or KV Stores isn't acceptable.
Ideally I would use the script I already have, but I guess my confusion is how do I get the reputation data into the events. We receive so many binary events that I feel like a lookup table would be inefficient and we've had issues with them in the past. The reason I wanted to add the field at index is to allow me to search or filter events based on reputation, generate charts and metrics on events by reputation, etc.
An external / scripted lookup would work here. Instead of distributing a huge CSV or maintaining a KV Store, a script is invoked with batches of input values and returns the corresponding input/output value pairs. It is a search-time operation, but it can be distributed to the indexers. There is a sample script for doing lookups against DNS that ships with Splunk. Docs I found offhand, though I think there are better ones out there: http://docs.splunk.com/Documentation/Splunk/6.2.5/Knowledge/Addfieldsfromexternaldatasources#Externa...
I feel pretty dumb with this. Based on that link, what I would do is:
1) Configure the Python script to accept a hash as an input
2) Configure transforms.conf as follows (the setting names are external_cmd and fields_list):

[replookup]
external_cmd = file_reputation.py hash reputation
fields_list = hash, reputation
3) I would call this command via
... | lookup replookup hash
Is this correct? This would assign a reputation at search time, but how would I perform operations over this data, such as running a stats count on reputation values, creating charts, or just searching for events with a certain reputation?
Perhaps I'm missing something in the implementation and use?
The panels the analysts review contain other events besides binary events, so I'm not sure how to incorporate the "lookup" into this. This is a basic example of what a panel would display:
Time        ID        Signature  Source          SPort  Destination     DPort  URL / Hash                                    Reputation
2015-08-18  Callback             18.104.22.168   54321  22.214.171.124  80     badcallback.com                               N/A
2015-08-18  Binary               126.96.36.199   12356  188.8.131.52    80     badsite.com/badfile.exe 654654ae654f4554s34   Known-Malware
I have to admit, the exact mechanics of writing a scripted lookup I'm a bit fuzzy on, but I've seen it done several times by better python programmers than me. (So what you've written for getting the lookup going in steps 1 & 2 looks right, but I'd have to play around with it to know for certain.) Also if you're editing conf files, a reload or restart of Splunk is needed for changes to take effect.
Once you have the scripted lookup defined, you can invoke the lookup explicitly in your search as you described, and then perform a stats count, a where, a search, or anything else you want to do with the newly added field:
... | lookup replookup hash | stats count by reputation
... | lookup replookup hash | where reputation="Known-Malware"
With regards to other events, if things are set up right, any event without a hash (the input field) just wouldn't get a reputation, or you could set it to a default like "N/A".
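If you want the literal "N/A" default rather than a null field, one way (a sketch using standard SPL; adjust the field names to your extraction) is to fill the gap after the lookup runs:

```
... | lookup replookup hash | fillnull value="N/A" reputation
```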
But also, once you have the scripted lookup working, and you have an automatic field extraction for the hash, then you configure an automatic lookup for your events as described: http://docs.splunk.com/Documentation/Splunk/6.2.5/Knowledge/Usefieldlookupstoaddinformationtoyoureve...
The benefit here is that all events are then decorated automatically with the reputation when you search, so you can filter and report on the field directly.
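For example, with the automatic lookup in place the reputation field behaves like any other extracted field at search time (the sourcetype name here is hypothetical):

```
sourcetype=binary_events reputation="Known-Malware"
sourcetype=binary_events | timechart count by reputation
```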
I have the Python script already written to accept input, so I should be good there. You can also request 'https://[splunk]/debug/refresh' to have Splunk refresh the config files without booting users. I need to figure out how to get the script to return the values to Splunk for display. I looked at how external_lookup.py does it, but that method is not working for me for some reason.
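For what it's worth, the contract external_lookup.py implements is CSV over stdin/stdout: Splunk hands the script a CSV whose header row is your fields_list, and the script writes the same CSV back with the output column filled in for each row. A minimal sketch of that shape — lookup_reputation() here is a hypothetical stand-in for your existing reputation-cloud call:

```python
import csv
import sys

def lookup_reputation(hash_value):
    # Hypothetical stand-in for your existing reputation-cloud query;
    # replace this with a call into your script's logic.
    known_bad = {"654654ae654f4554s34": "Known-Malware"}
    return known_bad.get(hash_value, "Unknown")

def process(infile, outfile):
    # Splunk sends a CSV whose header matches fields_list in transforms.conf
    # (here: hash, reputation). For each row, fill in the output field and
    # echo the row back; Splunk matches rows to events by the input field.
    reader = csv.DictReader(infile)
    if reader.fieldnames is None:  # empty input, nothing to do
        return
    writer = csv.DictWriter(outfile, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        if row.get("hash"):
            row["reputation"] = lookup_reputation(row["hash"])
        writer.writerow(row)

if __name__ == "__main__":
    process(sys.stdin, sys.stdout)
```

Batching the reputation-cloud calls inside process() (rather than one HTTP request per row) is worth considering, given the volume you describe.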
I know about doing stats or where clauses based on the lookup, and I think you're right that if it hits an event without a hash it would return null.
The automatic lookup seems interesting; however, it seems like it's based on a static lookup file. It's not clear how I would automatically trigger the Python script to pull the reputation and then decorate the event with it. Another path to investigate though, thanks.
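For what it's worth, automatic lookups aren't limited to file-based lookups: the LOOKUP- setting in props.conf can reference any lookup stanza defined in transforms.conf, including a scripted one. A sketch, assuming the [replookup] stanza above and a hypothetical sourcetype name:

```
# props.conf -- "binary_events" is a hypothetical sourcetype
[binary_events]
LOOKUP-reputation = replookup hash OUTPUT reputation
```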