Getting Data In

Is there an order of precedence across servers (UF -> IDX -> SH)?

reswob4
Builder

There is a lot of information about the order of precedence of props.conf within a single Splunk server, but what about between servers? I assumed it would be the last server (i.e. the SH), but I am having a problem where the time extraction works great in one spot but not on the SH, despite the props.conf being the same. Let me explain.

I have this sequence: UF -> IDX cluster -> SH cluster.

I used the input wizard on one of the IDX servers to help create the props.conf file. The main reason for doing it this way was to get the correct config for timestamp recognition (https://docs.splunk.com/Documentation/SplunkCloud/7.2.7/Data/Configuretimestamprecognition). That config works just fine for any data of that sourcetype uploaded directly to that IDX server.
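For reference, the timestamp portion of the props.conf the wizard produced looks roughly like this (the sourcetype name and format string here are placeholders, not my actual values):

```
[my_custom_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
TZ = UTC
```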

I then copied that working props.conf to the UF and pushed it out to the SH cluster.

But when I start indexing data via the UF, the timestamp field is ignored and Splunk uses the index time instead.

So do I need to push the props.conf out to all the indexers also?

Is there any possibility that something else (an old props.conf) on the SH is overriding the one from the UF?
If so, how would I find out?

Can btool pull from multiple servers to determine configuration conflicts?

Additional troubleshooting tips appreciated. Thanks in advance.

1 Solution

skalliger
Motivator

Hi reswob4 and welcome,

I suggest that you make yourself familiar with how the indexing process in Splunk works: https://wiki.splunk.com/Community:HowIndexingWorks
Timestamping occurs at the first full parsing instance (merging pipeline), which in your case is the indexer (since there is no HF in between).
Timestamping settings cannot be applied on a Universal Forwarder, and as such must be configured via the Cluster Master (if your indexers are clustered).
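For example (assuming a standard clustered setup; the app name `my_props_app` is just a placeholder), the props.conf would go under the Cluster Master's `master-apps` directory and be pushed to the peers with a bundle apply:

```
# On the Cluster Master:
cp props.conf $SPLUNK_HOME/etc/master-apps/my_props_app/local/
$SPLUNK_HOME/bin/splunk apply cluster-bundle
```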

Also, you don't want to upload anything to an IDX directly. Usually, the UI on an IDX isn't even enabled. It's a best practice to forward all your SH data to the indexing layer, and as such you could do test uploads on your SH.

Skalli



arjunpkishore5
Motivator

If you use structured data extraction in the props.conf on the UF, the result is "pre-baked" events that bypass any props.conf defined on the HF or indexer. These events pass directly to the indexers' indexing queue, as opposed to the parsing queue. Refer to section 4 in the Splunk wiki: https://wiki.splunk.com/Community:HowIndexingWorks
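A minimal sketch of such a structured-data stanza on the UF (the sourcetype and field names here are placeholders):

```
[my_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = timestamp
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
```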

I normally stick to applying props and transforms on the Heavy Forwarder or the Indexer, since these provide better control over parsing and filtering events.

As for precedence, it is determined by the direction of the data flow, since each layer works on the raw event it receives. Going by this, if you have a UF which forwards to a HF, which forwards to an indexer, then the order is UF > HF > Indexer > Search Head.
If your UF sends directly to an indexer, then it is UF > Indexer > Search Head.
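Purely as an illustration (this is not Splunk code), the "first parsing instance in the path wins" rule can be sketched like this:

```python
# Illustrative sketch: the first "heavy" instance in the data path
# (HF or indexer) performs full parsing, including timestamping;
# later tiers receive cooked data, so their index-time props.conf
# settings no longer apply.

def first_parser(path):
    """Return the first tier in the path that performs full parsing.

    Universal Forwarders ("UF") do not parse; Heavy Forwarders ("HF")
    and indexers ("IDX") do; Search Heads ("SH") only search.
    """
    for tier in path:
        if tier in ("HF", "IDX"):
            return tier
    return None

print(first_parser(["UF", "HF", "IDX", "SH"]))  # HF parses first
print(first_parser(["UF", "IDX", "SH"]))        # IDX parses first
```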


woodcock
Esteemed Legend

We train all of our clients to just send props.conf everywhere. This almost never causes problems (there is one exceedingly unlikely edge case where it can) and is far less error-prone than trying to sort out what goes where. If you really would like to learn, see here:
https://www.aplura.com/cheatsheets/
Especially here:
https://www.aplura.com/cheatsheets/props_locations.html
https://www.aplura.com/cheatsheets/props_index-time_order.html


richgalloway
SplunkTrust
SplunkTrust

There's no such thing as a configuration conflict between servers. Certain properties apply at specific tiers and are ignored elsewhere. Other properties can apply at multiple tiers, but the first server in the data path to process them is the one whose settings take effect. See https://wiki.splunk.com/Where_do_I_configure_my_Splunk_settings%3F for more information.
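To see which file supplies each setting on a given server, btool can be run locally there; it only reads the local filesystem, so comparing servers means running it on each one. A typical invocation (the sourcetype name is a placeholder):

```
$SPLUNK_HOME/bin/splunk btool props list my_sourcetype --debug
```

The `--debug` flag prints the file path each effective setting came from, which is the quickest way to spot an old props.conf overriding the one you deployed.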

---
If this reply helps you, Karma would be appreciated.


reswob4
Builder

The document of how indexing works was great. I put the props.conf on the indexers and that did the trick.

Thanks. And yes, I know @arjunpkishore5 also mentioned that link. Thanks to everyone.

Even after using Splunk for 7 years, I'm still learning...
