Getting Data In

Is the following scenario possible: a more cost-effective solution to onboard Sysmon?

charlottelimcl
Explorer

I would like to understand if the following scenario would be possible:

1. Security detection queries/analytics relying on Sysmon logs are onboarded and enabled.

2. When the logs of a certain endpoint match a security analytic, an alert is created and sent to a case management system for an analyst to investigate.

3. At this point, the analyst is not able to view the Sysmon logs of that particular endpoint. He must manually trigger the Sysmon logs to be indexed from the case management platform; only then can he search the Sysmon logs in Splunk for the past X number of days.

4. However, the analyst will not be able to search the Sysmon logs of other, unrelated endpoints.

 

In summary, is there a way to deploy security detection analytics that monitor and detect across all endpoints, while allowing the security analyst to search the Sysmon logs of only the endpoint that triggered the alert, based on an ad hoc request via the case management system?


isoutamo
SplunkTrust
Theoretically yes; practically it's very hard to do and needs a lot of writing of dashboards, reports, etc. I really don't suggest it.

What is the issue you are trying to solve with this approach?

charlottelimcl
Explorer

The reason for the above is that I have hundreds of servers, and if every server has its Sysmon log indexed, it will take up a lot of bandwidth, storage space, and cost. Hence I am looking for a possible solution where Splunk security detection analytics run across all servers and trigger an alert on any positive hits, and only at the request of the security analyst is the Sysmon log of a particular endpoint of interest indexed, for example for the past 5 days.


isoutamo
SplunkTrust

It seems that I understood your question a little differently.

So you have a separate system (not Splunk) that is currently monitoring those source systems. When it finds something, it creates an alert, and only after that does log collection start from the source system?

How can you be sure you will get all the old logs needed for the analysis if you start collection only after the event has happened? Some related activity could have happened long before the event that created the alert!


charlottelimcl
Explorer

At the moment the servers are monitored in Splunk, but only Windows Event Log: Security logs are being piped in. I want to increase the monitoring capability to include Sysmon and PowerShell logging, but I do not want Sysmon logs from ALL servers to be indexed and searchable unless a security event warrants a particular server having its Sysmon indexed.

 

i.e. 

1. All servers have Sysmon enabled.

2. Splunk's security analytics and detection queries run in the background to monitor the Sysmon data; if there are any hits, an alert is created in Splunk and the alert log is indexed.

3. The alert is sent to a case management system.

4. At the request of the security analyst, he can ask to view the Sysmon of a particular server, and that server's Sysmon will be indexed in Splunk for the past 5 days.

5. The analyst will not be able to view Sysmon in Splunk for the rest of the servers, which are not indexed as they are unrelated to the security event. He can only index the Sysmon of a particular server IF he triggers that action from the case management system.


isoutamo
SplunkTrust

Then you just need a mechanism that enables log collection for that individual server. How depends on how you manage your UF (universal forwarder) configurations. If you have a Deployment Server (DS), just add a new server class for this collection and add the server to it. If you use something else, do the equivalent there.
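On a deployment server, that could be a minimal serverclass.conf along these lines (the server-class name, app name, and hostname here are made-up examples):

```ini
# serverclass.conf on the deployment server (names are examples only)
[serverClass:sysmon_ondemand]
# whitelist only the endpoint under investigation
whitelist.0 = server042.example.com

[serverClass:sysmon_ondemand:app:sysmon_inputs]
stateOnClient = enabled
restartSplunkd = true
```

After editing, the DS must re-read its configuration, e.g. with `splunk reload deploy-server`.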

Of course you must have a separate UF app configured for this collection: just an inputs.conf file with a suitable configuration.
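Such an app can be as small as a single inputs.conf stanza. Assuming Sysmon logs to its default event log channel, a sketch could look like this (the index name is an assumption):

```ini
# sysmon_inputs/local/inputs.conf on the UF (sketch)
[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = 0
index = sysmon_ondemand
renderXml = true
# current_only = 0 makes the UF also read events already sitting in the
# channel, which is how the "past X days" of history would get picked up
current_only = 0
```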

When the analysis is done, simply remove that UF app from the server with the DS or your other configuration management system.

You should probably have a separate index for these logs with a short retention period, so you get rid of them once they are no longer needed in Splunk.
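A sketch of such an index definition in indexes.conf (the index name and the 7-day retention are assumptions to adjust):

```ini
# indexes.conf on the indexer(s)
[sysmon_ondemand]
homePath   = $SPLUNK_DB/sysmon_ondemand/db
coldPath   = $SPLUNK_DB/sysmon_ondemand/colddb
thawedPath = $SPLUNK_DB/sysmon_ondemand/thaweddb
# roll (and, with no frozen archive configured, delete) data older than 7 days
frozenTimePeriodInSecs = 604800
```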
