Getting Data In

Log data separation according to host IP address

Sepe
New Member

Our scenario in a new deployment:

  • One indexer server (Windows), plus a separate Windows server as the search head
  • One SC4S instance on Linux
  • Two customers
  • First customer: Windows and Linux servers. Windows Security log data goes to the indexer via a Universal Forwarder installed on every server; Linux security log data goes to SC4S and from there to the indexer.
  • Second customer: Windows and Linux servers, ESX, network devices, etc. Windows log data goes to the indexer via a Universal Forwarder installed on every server; Linux and other security log data goes to SC4S and from there to the indexer.
  • Universal Forwarder data from both customers arrives on the same default port 9997; syslog traffic to SC4S arrives on port 514.
  • Data from the two customers should be separated into two different indexes.
  • The only thing differentiating these customers is the IP address segments the data comes from.

I thought that separating log data according to the sending device's IP address would be a fairly straightforward scenario, but so far I have tested several props.conf / transforms.conf options suggested in the community pages and in the documentation, and none of them have worked: all data ends up in the “main” index.
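For illustration, the kind of configuration I have been testing on the indexer looks roughly like this (the index names and IP ranges below are placeholders, not my real values):

transforms.conf:

[route_customer_a]
SOURCE_KEY = MetaData:Host
REGEX = ^host::10\.1\.
DEST_KEY = _MetaData:Index
FORMAT = customer_a

[route_customer_b]
SOURCE_KEY = MetaData:Host
REGEX = ^host::10\.2\.
DEST_KEY = _MetaData:Index
FORMAT = customer_b

props.conf:

[default]
TRANSFORMS-routing = route_customer_a, route_customer_b

One thing I am unsure about: this matches on the host metadata field, and for Universal Forwarder data that is usually the hostname rather than the IP address, which may be why it never matches for me.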

If I put defaultDatabase = <index name> in indexes.conf, the logs are sent to this index, so the index itself is working and I can search in it, but then all data would go to the same index…

What, then, is the correct way to separate data into two different indexes according to the sending device's IP address, or better still, according to IP segment?

As I’m really new to Splunk, I would appreciate any advice from somebody who has done something similar and has insight into how to accomplish this.

 


PickleRick
SplunkTrust

That's a slightly complicated setup. Unfortunately, UFs can send data "anywhere". You can fight this to some extent, but in general, with S2S the metadata fields are set on the sending end and you have no network-level metadata to route on at the receiver. You could partly mitigate this by sending from the UF over HTTP (using httpout in outputs.conf) and enabling index validation with the s2s_indexes_validation option for specific HEC tokens (but this only works with sufficiently recent Splunk versions).
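A minimal sketch of that httpout approach, assuming a sufficiently recent Splunk version and placeholder token/index names (the HEC token must already exist on the receiving side):

outputs.conf on the UF:

[httpout]
httpEventCollectorToken = 11111111-2222-3333-4444-555555555555
uri = https://indexer.example.com:8088

inputs.conf on the indexer, pinning the token to one customer's index:

[http://customer_a_token]
token = 11111111-2222-3333-4444-555555555555
index = customer_a
indexes = customer_a

With one token per customer, combined with the s2s_indexes_validation option mentioned above, each customer's forwarders should only be able to write to their own index regardless of what the inputs on the UF side say.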

As for the syslog data, I'd suggest doing the filtering and rerouting at the SC4S layer.
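A minimal sketch of what that could look like using SC4S's compliance_meta_by_source override (subnets and index names are placeholders; paths assume a default SC4S install):

/opt/sc4s/local/context/compliance_meta_by_source.conf:

filter f_customer_a {
    netmask("10.1.0.0/16")
};
filter f_customer_b {
    netmask("10.2.0.0/16")
};

/opt/sc4s/local/context/compliance_meta_by_source.csv:

f_customer_a,.splunk.index,customer_a
f_customer_b,.splunk.index,customer_b

Restart the SC4S container afterwards so it picks up the new context files.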
