Hi, I'm an SE at a company that makes log files available to our customers via a URL-based API, e.g. https://logs.company.com/api. We use a Python script to pull the logs; the script runs on a Splunk Forwarder, which sends the various logs to a Splunk Enterprise instance. This all works fine.

I have an external customer that uses the method described above, but they now want to switch to Splunk Cloud. Previously, their Forwarder and Indexer were installed on premises.

My question: what is the best method to get the same logs into Splunk Cloud? Should we keep the existing Forwarder and reconfigure it to point from the local Enterprise instance to the new Splunk Cloud? Or should we use the Splunk Add-on Builder? What confuses me is knowing when to use the Add-on Builder and when not to. I hope I've provided enough info for just a high-level architecture decision.
Why would we build and use a Splunk Add-on instead of using a Forwarder?
Current: URL API logs are extracted via a Python script (with a TOKEN) running on a local Linux Forwarder, which sends the logs to an on-premises Splunk Indexer.
New Option 1: URL API logs are extracted via a Python script (with a TOKEN) running on a local Linux Forwarder, which sends the logs to Splunk Cloud.
New Option 2: URL API logs are extracted via a Splunk Add-on (with a TOKEN?) running <where?>, which sends the logs to Splunk Cloud.
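For context, the collector in the "Current" setup is roughly this shape. This is a minimal sketch, not our actual script: the endpoint, token handling, and output path are placeholder assumptions, and in both Option 1 and Option 2 something equivalent still has to run somewhere and authenticate with the token.

```python
# Hypothetical sketch of the Python collector described above.
# The URL, token, and output path are illustrative placeholders.
import urllib.request

API_URL = "https://logs.company.com/api"     # example endpoint from the post
OUTPUT_FILE = "/var/log/company/api.log"     # a file the Forwarder monitors

def auth_headers(token):
    """Build a bearer-token header; the real API may expect a different scheme."""
    return {"Authorization": f"Bearer {token}"}

def fetch_logs(token, url=API_URL):
    """Pull one batch of log events from the URL API."""
    req = urllib.request.Request(url, headers=auth_headers(token))
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

def write_for_forwarder(events, path=OUTPUT_FILE):
    """Append events to a file the Universal Forwarder tails and indexes."""
    with open(path, "a") as fh:
        fh.write(events.rstrip("\n") + "\n")
```

In Option 1 this script stays put and only the Forwarder's output destination changes; in Option 2 the same fetch-with-token logic would live inside a modular input built with the Add-on Builder instead.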