Our scenario in a new deployment:
I thought that separating log data according to the sending device's IP address would be a fairly straightforward scenario, but so far I have tested several options in props.conf / transforms.conf suggested on the community pages and in the documentation, and none of them have worked; all the data still lands in the “main” index.
If I put defaultDB = <index name> in indexes.conf, the logs are sent to that index, so the index itself is working and I can search in it, but then all data would go to the same index…
What, then, is the correct way to separate data into two different indexes according to the sending device's IP address, or better still, according to IP segment?
As I'm really new to Splunk, I'd appreciate any advice from somebody who has done something similar and has insight on how to accomplish this.
That's a slightly complicated setup. Unfortunately, UFs can send data "anywhere": with S2S, the index metadata field is set on the sending end, and the receiving side gets no network-level metadata to route on. You can fight this to some extent. One mitigation is to send from the UF over HTTP (using httpout) and enable index validation with the s2s_indexes_validation option for specific HEC tokens (but that works only with sufficiently recent Splunk versions).
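For completeness, index-time routing by sender address is usually done with a host-based props/transforms pair applied where parsing happens, i.e. on the indexer or a heavy forwarder (these settings have no effect on a UF, which does not parse data). A minimal sketch, assuming a hypothetical target index named `netops` and the 10.1.2.0/24 segment:

```
# props.conf (on the indexer or a heavy forwarder, not the UF)
[host::10.1.2.*]
TRANSFORMS-route_by_ip = route_to_netops

# transforms.conf
[route_to_netops]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = netops
```

Two caveats: the target index must already exist on the indexer, and the `host::` stanza matches the event's host field, which is only the sender's IP if nothing has rewritten it to a hostname; if your hosts arrive as names, match on those instead.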
As for the syslog data, I'd suggest doing the filtering and rerouting at the SC4S layer.
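For syslog sources handled by SC4S, the per-source index is normally overridden in SC4S's splunk_metadata.csv lookup rather than in Splunk itself. A sketch, assuming a hypothetical index named `netfw` and a source that SC4S classifies as `cisco_asa`:

```
# /opt/sc4s/local/context/splunk_metadata.csv
# format: key,metadata_field,value
cisco_asa,index,netfw
```

SC4S also supports overrides keyed on the sending host/IP rather than the vendor/product classification; check the current SC4S documentation for the exact override mechanism in your version.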