Point 1. This will help with sourcetype naming conventions. People still use names like my_app_sourcetype, i.e. with underscores, which isn't a problem, but the recommended naming convention is described in the link below:
https://docs.splunk.com/Documentation/AddOns/released/Overview/Sourcetypes

Point 2. There is a plethora of data sources; many are common, but at some point you will have a custom log source and will need to create your own sourcetype. These are some of the pretrained ones:
https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/Data/Listofpretrainedsourcetypes
Many common sourcetypes come out of the box with the Splunk TAs, so that is your starting point. These should be used as-is and do not need to be changed, as they categorise the data, which is important for parsing. For any custom data source, you need to analyse your log source, check the format, timestamp, and other settings, and use props.conf to define the sourcetype following your naming convention standards; this makes an admin's life easier. See the link below:
https://lantern.splunk.com/Splunk_Platform/Product_Tips/Data_Management/Configuring_new_source_types

Point 3. Syncing two different SH clusters means you have one Deployer for each; that's the Splunk setup. So you need some kind of repo, like git, where your KOs / apps are located, and keep that under version control. You can then use Ansible to deploy the apps to the Deployers, which push the apps out to the clusters. You could also use the Linux rsync command to keep a repo in sync with the Deployers. Either way, you should have a strategy for this type of app management based on the tools you use.
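To illustrate the custom-sourcetype step in Point 2, here is a minimal props.conf sketch. The sourcetype name, timestamp layout, and line breaker are assumptions about a hypothetical log, not values from any real add-on; match them to what your actual source emits.

```
# props.conf -- hypothetical custom sourcetype "acme:webgw:access"
# All values below are assumptions; adjust to your actual log format.
[acme:webgw:access]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 10000
# Assumes each event starts with a timestamp like: 2024-05-01 12:34:56.789 +0000
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N %z
MAX_TIMESTAMP_LOOKAHEAD = 30
```

Setting SHOULD_LINEMERGE = false with an explicit LINE_BREAKER and a tight TIME_FORMAT is the usual way to get predictable event breaking and timestamping for a single-line custom source.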