Getting Data In

Log data separation according to host IP address

Sepe
New Member

Our scenario in new deployment:

  • One indexer server (Windows) (+one separate Windows server as search head)
  • One SC4S in Linux
  • Two customers
  • First customer with Windows / Linux servers. Windows servers' security log data is sent to the indexer via Universal Forwarders installed on all servers; Linux servers' security log data is sent to SC4S and then forwarded to the indexer
  • Second customer with Windows / Linux servers, ESX, network devices, etc. Windows servers' log data is sent to the indexer via Universal Forwarders installed on all servers; Linux and other security log data is sent to SC4S and then forwarded to the indexer
  • Both customers' Universal Forwarder data arrives on the same default port 9997; SC4S receives syslog on 514
  • Data from the two customers should be separated into two different indexes
  • The only thing differentiating the customers is the IP address segment the data comes from

I thought that separating log data according to the sending device's IP address would be a fairly straightforward scenario. So far, however, I have tested several props.conf / transforms.conf options suggested in the community pages and in the documentation, and none of them have been successful: all data ends up in the "main" index.
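For reference, the index-time routing approach usually suggested looks roughly like the sketch below (index name `customer_a` and the `10.1.*` segment are placeholders, not from the original post). Note the caveat: a `[host::...]` stanza matches the event's *host metadata field*, and a Universal Forwarder sets that to the hostname by default, not the IP address, so an IP-based stanza will silently never match and data falls through to the default index.

```
# props.conf on the indexer (sketch; segment and index name are placeholders)
[host::10.1.*]
TRANSFORMS-route_customer_a = route_customer_a

# transforms.conf on the indexer
[route_customer_a]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = customer_a
```

If the host field really does contain the IP (e.g. the forwarder was configured that way, or the data is raw syslog), this pattern works; otherwise it is the most common reason "all data is deposited to main".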

If I set defaultDB = &lt;index name&gt; in indexes.conf, the logs are sent to that index, so the index itself is working and I can search it, but then all data goes to the same index…

What, then, is the correct way to separate data into two different indexes according to the sending device's IP address, or better still, according to IP segment?

As I'm really new to Splunk, I'd appreciate any advice from anyone who has done something similar and has insight into how to accomplish this.


PickleRick
SplunkTrust

That's a slightly complicated setup. Unfortunately, UFs can send data "anywhere". You can try to fight this to some extent, but in general with S2S the metadata fields are specified on the sending end, and you don't have any network-level metadata at index time. You could partially mitigate this by sending from the UF over HTTP (using httpout) and enabling index validation with the s2s_indexes_validation option for specific HEC tokens (but this only works with sufficiently recent Splunk versions).
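A rough sketch of that mitigation, assuming one HEC token per customer (the URI, token values, and index names below are placeholders): the UF is pointed at HEC via httpout, and the token on the receiving side is restricted to the customer's index, so a misconfigured forwarder cannot write into the other customer's index.

```
# outputs.conf on customer A's Universal Forwarders
# (httpout requires a UF version that supports HTTP output)
[httpout]
uri = https://indexer.example.com:8088
httpEventCollectorToken = <customer_a_token>

# inputs.conf on the indexer: a dedicated HEC token per customer
[http://customer_a]
token = <customer_a_token>
index = customer_a      # default index for this token
indexes = customer_a    # allow-list: events may only target this index
```

The per-token `index` / `indexes` settings do the separation; which additional S2S-over-HTTP validation options are available depends on the Splunk version, so check the inputs.conf reference for your release.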

As for the syslog data, I'd suggest doing the filtering and rerouting on the SC4S layer.
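One way to do that routing in SC4S is its compliance-override mechanism, which assigns metadata (including the target index) based on syslog-ng filters such as source subnet. The sketch below is an assumption-laden illustration: the file paths, filter names, CIDRs, and index names are placeholders, and the exact file locations and CSV syntax should be checked against the SC4S documentation for your version.

```
# /opt/sc4s/local/context/compliance_meta_by_source.conf
# syslog-ng filters keyed by source subnet (CIDRs are placeholders)
filter f_customer_a { netmask("10.1.0.0/16") };
filter f_customer_b { netmask("10.2.0.0/16") };

# /opt/sc4s/local/context/compliance_meta_by_source.csv
# filter name, metadata key, value
f_customer_a,.splunk.index,customer_a
f_customer_b,.splunk.index,customer_b
```

With something like this in place, events arriving at SC4S from each customer's address range get their index overridden before being forwarded to the indexer, so no props/transforms work is needed on the Splunk side for the syslog path.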
