Getting Data In

One Search Head Ignores props.conf, One Does Not

dbray_sd
Path Finder

We have 3 clustered indexers and an original Search Head. We installed an app with a custom props.conf on the Search Head, and it is NOT showing the properly extracted fields when performing searches.

We deployed a new Search Head and installed the exact same app. The new Search Head shows the proper fields. The two servers appear to be identical, and running:

splunk cmd btool props list --debug

...shows exactly the same results, line by line, for the app. The original server does have some extra apps, but based on the btool output above, there do not appear to be any conflicts with other apps.
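For reference, the comparison was done roughly like this (hostnames are placeholders, and the paths assume a default /opt/splunk install):

# On each Search Head, capture the merged props.conf view:
/opt/splunk/bin/splunk cmd btool props list --debug > /tmp/props_$(hostname).out

# Copy both files to one host, then compare:
diff /tmp/props_old-sh.out /tmp/props_new-sh.out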

What would be the next steps in troubleshooting why the original Search Head does not show the proper fields?

1 Solution

dbray_sd
Path Finder

Unfortunately, I had to give up on my investigation and ended up re-installing the broken SH.

I removed Splunk, reinstalled the same rpm, reinstalled the same troublesome app, and everything worked (as expected). Then I moved the other apps back in one at a time, each time re-checking the originally broken props.conf/app, and each time everything continued to work. So I have no real idea what was causing the issue, but everything is working now.
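For anyone repeating the same recovery, the restore process was roughly the following sketch (app names and staging paths are illustrative):

# Stage the extra apps outside etc/apps, then restore them one at a time:
/opt/splunk/bin/splunk stop
mv /opt/splunk/etc/apps/extra_app_1 /tmp/staged_apps/

# For each staged app, move it back and verify:
mv /tmp/staged_apps/extra_app_1 /opt/splunk/etc/apps/
/opt/splunk/bin/splunk restart
# ...then re-run a search against the troublesome sourcetype to confirm
# the field extractions still work.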


manjunathmeti
Champion

hi @dbray_sd,

Check which app context you are running the searches in. In the app/user context, configurations in the currently running app take precedence over all other apps' configurations, even though btool lists them all.

From the Splunk documentation:

Precedence within the app or user context:
For files with an app/user context, directory priority descends from user to app to system:

1. User directories for current user -- highest priority
2. App directories for currently running app (local, followed by default)
3. App directories for all other apps (local, followed by default) -- for exported settings only
4. System directories (local, followed by default) -- lowest priority
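As a concrete illustration (not from the documentation itself), for a user jsmith running searches from an app called my_app, props.conf would be resolved in roughly this directory order:

$SPLUNK_HOME/etc/users/jsmith/my_app/props.conf        (highest)
$SPLUNK_HOME/etc/apps/my_app/local/props.conf
$SPLUNK_HOME/etc/apps/my_app/default/props.conf
$SPLUNK_HOME/etc/apps/<other apps>/local and default   (exported settings only)
$SPLUNK_HOME/etc/system/local/props.conf
$SPLUNK_HOME/etc/system/default/props.conf             (lowest)

btool can also show the merged result for a specific context, e.g.:

splunk cmd btool props list --debug --app=my_app --user=jsmith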


dbray_sd
Path Finder

Not really sure I understand exactly what you are asking me to confirm. The app doesn't matter. I can duplicate this issue in our customized app, in the Search & Reporting app, or anywhere else. The particular app that contains the specific props.conf file is set with global permissions.

On the new working SH, it also doesn't matter where or how I perform the searches. It always honors the global permissions on the props.conf file and always works.
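In case it helps anyone checking the same thing: the global sharing lives in the app's metadata, so one sanity check is to confirm it is identical on both SHs (app name is a placeholder):

# Sharing is controlled in the app's metadata files:
$SPLUNK_HOME/etc/apps/my_app/metadata/default.meta
$SPLUNK_HOME/etc/apps/my_app/metadata/local.meta

# A globally shared props.conf should carry something like:
[props]
export = system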


richgalloway
SplunkTrust
SplunkTrust

Remember that btool shows the current configs *on disk* rather than those currently in use. In other words, btool shows what Splunk will load the next time it restarts. Have you tried restarting the old SH?
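A plain restart is the sure route; if that is disruptive, one partial alternative reloads many (though not all) configuration files without a restart:

/opt/splunk/bin/splunk restart

# or, without a full restart, load this in a browser on the SH:
# https://<search-head>:8000/debug/refresh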

---
If this reply helps you, Karma would be appreciated.

isoutamo
SplunkTrust
SplunkTrust

Does it behave the same way regardless of which user is running your query? It could be that on the old SH there is a private props definition which is not present on the new SH.
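A quick way to test this theory on the old SH (username and app are placeholders):

# Look for per-user props.conf files that could shadow the app's settings:
find $SPLUNK_HOME/etc/users -name props.conf

# Inspect the merged view for a specific user/app context:
splunk cmd btool props list --debug --user=jsmith --app=search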

r. Ismo


dbray_sd
Path Finder

Good theory, but no, that's not it. All users are having the issue on the old server, but not on the new one.


isoutamo
SplunkTrust
SplunkTrust

Do you have the same roles on both SHs, without any search filters defined? And are the search logs identical?
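One way to compare is to dump the effective role configuration on both SHs and diff it; srchFilter is the authorize.conf setting that restricts search results:

splunk cmd btool authorize list | grep -E '^\[|srchFilter' > /tmp/roles_$(hostname).out
diff /tmp/roles_old-sh.out /tmp/roles_new-sh.out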


dbray_sd
Path Finder

Yes, same roles, same LDAP settings, same everything (as far as I can tell). No search filters defined.


dbray_sd
Path Finder

Sorry, yes, of course. I've even gone as far as updating the entire environment to the latest Splunk version, to make sure it was not some older bug. So now everything is upgraded to the latest released version and has been restarted.
