
Has anyone experienced their sourcetypes not mapping correctly during a production deployment, even though they map properly in test/local?
- How did you identify the culprit?
- I would like to try using btool, but I'm not very familiar with the syntax needed to show me the results I need


When you say mapping correctly, do you mean aliasing, or do you mean using transforms at ingestion to rewrite the sourcetype before it is indexed? (based either on regex or perhaps on metadata)
In the first case, sourcetype (ST) aliasing is done at search time and would thus be applied on your Search Head or Search Head Cluster.
The latter, using props and transforms to change the data "in-flight", i.e. before it's indexed to disk, is a bit more tricky. You need to know where in the ingestion phase the data is cooked vs. uncooked before index time. By this I mean that in complex deployments there are typically intermediate forwarders before the indexers. Think of a scenario like this:
UF ----> HF -----> IDX
In this case, if you are ingesting data at the UF and have props on your IDX to remap the sourcetype, this won't work, as the data is cooked at the HF layer. Those props / transforms need to go on the HF nearest to the UF, or the ingestion point.
To this point, it's most likely working in your dev environment because your dev instance is a single-node all-in-one box.
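For reference, an index-time sourcetype rewrite of the kind described above usually pairs a TRANSFORMS- setting in props.conf with a transforms.conf stanza that writes to MetaData:Sourcetype. The stanza names, regex, and sourcetype values below are placeholders; substitute your own:

```ini
# props.conf -- deploy on the first HF in the path (names are placeholders)
[original_sourcetype]
TRANSFORMS-rewrite_st = set_new_sourcetype

# transforms.conf -- same app, same instance
[set_new_sourcetype]
REGEX = some_pattern_in_the_raw_event
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::new_sourcetype
```

Because this runs at parsing time, it only takes effect on the first instance that cooks the data.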
For btool, good thought. This should be the very first place you check!
The general syntax is quite simple:
./splunk btool config list
Here config is the config file you want to check, e.g. props, transforms, authorize, web, inputs, outputs, etc. And list does just that: it lists out the contents as Splunk sees them on disk.
In addition, you can specify particular stanzas in those config files if you know them:
./splunk btool indexes list _internal
This lists out the configuration for the _internal index as it is applied by Splunk from the configurations it reads. You can use this with props / transforms to drill down on specific sourcetypes and the configurations applied.
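To trace exactly which app and file a setting comes from, btool also accepts a --debug flag that prints the source path next to each line. The sourcetype and stanza names below are placeholders:

```shell
# Show the effective props for one sourcetype; --debug prefixes each
# setting with the file it was read from (app/local vs app/default, etc.)
./splunk btool props list your_sourcetype --debug

# Same drill-down for a transforms stanza
./splunk btool transforms list your_transform_stanza --debug
```

Run this on the instance that actually cooks the data (the HF in the diagram above), since that is where the winning config matters.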
Happy hunting!

We were able to find the root cause. It turned out the forwarder was configured as an HF, which meant the TA needed to be deployed on the HF as well.
Initially it was communicated that we were working with a
UF -> IDX
setup, but it turned out to be
HF -> IDX
so the TA deployed on the IDX could not map the sourcetype, since it should have been deployed on the HF.
Thanks for the great help @esix 🙂


The general rule of thumb is that a UF doesn't do parsing. There are some exceptions, however, for structured data sources like CSV and indexed extractions.
So for you, those props and transforms do need to go on your indexers.
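As an illustration of that exception, a UF will parse structured data itself when props.conf on the forwarder enables indexed extractions. The sourcetype name below is a placeholder:

```ini
# props.conf on the Universal Forwarder (placeholder sourcetype name)
[my_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ,
HEADER_FIELD_LINE_NUMBER = 1
```

Note that once INDEXED_EXTRACTIONS is in play, the UF sends parsed (cooked) data, so downstream props/transforms that try to rewrite the sourcetype will no longer apply.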



Hi esix,
It's through transforms at ingestion. Regarding the setup, we have something like UF (on-prem) --> IDX (cloud). With this scenario, would it make any difference?
