We currently have our forwarders configured to send logs to Splunk Cloud, and we are also setting up a DR on-prem server. The question is how to configure the UF to send to both the cloud and the DR (on-prem) environment. There are no issues with the cloud environment. Is it possible to send to both? On the UF the certificate is for Splunk Cloud, and I am not sure how to add our on-prem certificate.
If the intention is cloning all data to both and you're okay with the double license ingest, you just need to configure outputs similar to the example below.
There could be other TLS settings to include, but adding a comma-delimited list in [tcpout] will duplicate all logs to both groups listed, which can each have their own independent cert settings.
Another method is to create a "050_clone_app" containing just the [tcpout] stanza, referencing the exact names of the tcpout groups defined in the 100 Splunk Cloud UF app and in your other outputs app for on-prem (see the sketch after the outputs.conf example below).
That way it's modular, can be managed with a deployment server (DS), and when you're ready to cut one side out you just delete the "050" app and the outputs app you no longer want.
We do this all the time to migrate from one Splunk to another with a clone period during migration and testing.
outputs.conf
[tcpout]
defaultGroup = cloud_indexers, onprem_indexers
[tcpout:cloud_indexers]
server = 192.168.7.112:9998, idx2, idx3, etc
clientCert = $SPLUNK_HOME/etc/auth/kramerCerts/SplunkServerCert.pem
# retain the other settings from the UF Cloud 100 app
[tcpout:onprem_indexers]
server = 192.168.1.102:9998, idx2, idx3, etc
clientCert = $SPLUNK_HOME/etc/auth/kramerCerts/SplunkServerCert.pem
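For the modular "050_clone_app" approach described above, a minimal sketch might look like the following. The app and path names are illustrative; the cloud group name must exactly match whatever group the Splunk Cloud 100 UF app defines, and the on-prem cert path and password are placeholders for your own.
etc/apps/050_clone_app/local/outputs.conf
# only overrides which output groups are active; the groups themselves live in their own apps
[tcpout]
defaultGroup = cloud_indexers, onprem_indexers
etc/apps/onprem_outputs/local/outputs.conf
[tcpout:onprem_indexers]
server = 192.168.1.102:9998
clientCert = $SPLUNK_HOME/etc/auth/onpremCerts/OnPremServerCert.pem
sslPassword = <on-prem cert password>
When the clone period is over, removing 050_clone_app (and the on-prem outputs app) from the deployment server's server class puts the forwarder back to cloud-only.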
Thanks. We have an on-prem license too. This on-prem environment will only be used for 2 days every quarter.
Well, as @gcusello already pointed out - you'd be paying for both your Cloud ingest volume and your on-prem volume. If that's fine with you...
There are other possible issues though and whether you can do that depends on how you're sending your data.
1) You can't specify multiple httpout stanzas in your forwarder. So if you want to send using s2s over http, tough luck.
2) I'm not sure but I seem to recall that you can't send both to tcpout and httpout (you might try to search this forum for details)
3) So we're left with two splunktcp outputs. It should work but remember that blocking one output blocks both outputs.
4) It also gets even more tricky to maintain if you want to selectively forward data from separate inputs - you have to remember which inputs to route to which outputs (see the _TCP_ROUTING sketch below).
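To illustrate point 4: per-input routing on a UF is done with _TCP_ROUTING in inputs.conf. A minimal sketch, assuming the group names from the outputs.conf example above (the monitor paths are purely illustrative):
inputs.conf
# clone this input to both groups (same as the defaultGroup)
[monitor:///var/log/app/audit.log]
_TCP_ROUTING = cloud_indexers, onprem_indexers
# send this input only to the on-prem DR indexers
[monitor:///var/log/app/debug.log]
_TCP_ROUTING = onprem_indexers
Any input without an explicit _TCP_ROUTING falls back to the defaultGroup, which is exactly the bookkeeping burden mentioned above.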
Hi @narenpg ,
yes, it's possible, but you pay the Splunk license twice.
You have to modify the outputs.conf to create a fork.
For more info, see https://docs.splunk.com/Documentation/Splunk/9.3.2/Forwarding/Routeandfilterdatad
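The linked page covers both cloning (what the outputs.conf fork above does) and selective routing. The selective variant requires a heavy forwarder rather than a UF, since it sets _TCP_ROUTING through props.conf/transforms.conf; a rough sketch with illustrative sourcetype and group names:
props.conf
[my_sourcetype]
TRANSFORMS-routing = route_to_onprem
transforms.conf
[route_to_onprem]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = onprem_indexers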
Ciao.
Giuseppe