Getting Data In

Why is my regex in transforms.conf for source in props.conf not working for one of three Indexers?

teedilo
Path Finder

I'm trying to use a regex in a transforms.conf file on the Indexer to prevent indexing of informational and debug messages in specific files. The messages are in this format:

2018-11-30 13:10:55,474 INFO blah blah blah
2018-11-30 13:10:55,474 DEBUG blah blah blah

There are three Indexers in our environment. I have this coded in the props.conf files on the Indexers:

[source::...*(plain|debug|startup).log*]
TRANSFORMS-null12 = setnull12

... and this in the transforms.conf files:

[setnull12]
REGEX = ^\d+-\d+-\d+\s+\d+:\d+:\d+,*\d+\s+([a-zA-Z0-9]+\s+)?(INFO|DEBUG)\s+.*
DEST_KEY = queue
FORMAT = nullQueue
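Since Splunk evaluates REGEX as PCRE, the pattern can be sanity-checked outside Splunk, for example with GNU grep -P where available. Lines that match are the ones the transform would send to nullQueue:

# The INFO and DEBUG samples should print (i.e. would be dropped);
# an ERROR line should not.
printf '%s\n' \
  '2018-11-30 13:10:55,474 INFO blah blah blah' \
  '2018-11-30 13:10:55,474 DEBUG blah blah blah' \
  '2018-11-30 13:10:55,474 ERROR this one should be kept' |
  grep -P '^\d+-\d+-\d+\s+\d+:\d+:\d+,*\d+\s+([a-zA-Z0-9]+\s+)?(INFO|DEBUG)\s+.*'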

The undesirable messages are no longer getting indexed by two of the Indexers, but they are still getting indexed on the third Indexer.

I've used btool to compare props.conf and transforms.conf files on all three Indexers. There are no differences in transforms.conf files and only inconsequential differences in props.conf files ("pulldown_type = true" set for some sourcetypes for two of the three Indexers, and some unrelated learned sourcetypes defined on one of the Indexers where the rules are working).
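For reference, the comparison was along these lines on each Indexer (output file names are just placeholders):

# --debug prefixes every line with the .conf file it came from, which
# also shows which copy wins Splunk's precedence rules.
$SPLUNK_HOME/bin/splunk btool props list --debug > /tmp/props_btool.out
$SPLUNK_HOME/bin/splunk btool transforms list --debug > /tmp/transforms_btool.out
# then diff the outputs across the three hosts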

Does anyone have any ideas of what might be wrong or how I might go about troubleshooting this? I realize it's impossible for anyone to say for sure without a complete picture of our configuration files, but any ideas would be appreciated.

1 Solution

bjoernjensen
Contributor

Hey,

Assuming indexer_1 and indexer_2 are working as intended, and indexer_3 has been restarted but still isn't doing what you want:

Is there a way to test a log file from indexer_1 (one that gets filtered correctly) on indexer_3, keeping everything in that file as is (EOL, encoding, etc.)?
Next you could try the reverse: feed a misbehaving log file from indexer_3 to indexer_1.
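A minimal way to replay a file on another indexer, assuming you can copy it over (paths here are made up):

# Copy a file that indexer_1 filters correctly over to indexer_3,
# preserving name, encoding, and line endings, then index it once.
# The copied path must still match the source::...*(plain|debug|startup).log*
# stanza or the transform won't be applied.
scp indexer1:/var/log/app/server-plain.log /tmp/regex-test/server-plain.log
$SPLUNK_HOME/bin/splunk add oneshot /tmp/regex-test/server-plain.log -index main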

That way you can narrow the analysis down to either the logs or the configs.

All the best,
Björn

vr2312
Contributor

Hey @teedilo, did you try installing the app from scratch, and also making sure there are no previously created configurations overriding the right transforms/props parameters?

teedilo
Path Finder

Sounds like a good suggestion. Something I'll consider if upgrading the Forwarders that are having this problem doesn't fix the issue.

martin_mueller
SplunkTrust
SplunkTrust

Slightly related: turn your indexers into an indexer cluster and let the cluster master manage their configuration. Then you won't have to worry about configurations drifting apart between indexers.
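The peer side is just a small server.conf stanza, roughly like this sketch (manager host and key are placeholders; check the docs for your version):

# server.conf on each indexer (cluster peer) -- the cluster master then
# pushes one identical set of props/transforms to all peers.
[clustering]
mode = slave
master_uri = https://cluster-master.example.com:8089
pass4SymmKey = some_shared_secret

[replication_port://9887]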

As for your forwarders, check whether the "old" ones are universal forwarders or heavy forwarders. If they're heavy then they do their own parsing and the indexers don't. Consider converting them to universal if that's the case and there's no good reason to keep them heavy.
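A quick way to tell, run on the forwarder host itself:

# A universal forwarder identifies itself in its version string:
$SPLUNK_HOME/bin/splunk version
# "Splunk Universal Forwarder x.y.z (build ...)" -> universal
# "Splunk x.y.z (build ...)"                     -> full instance (heavy)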

teedilo
Path Finder

Good idea on the Indexer cluster, Martin. I had heard about this capability, but I believe our Indexers are running on a version that doesn't include that support. (I'm a little embarrassed to say what version we're on. Upgrading is so painful because of the many Forwarders we have, which require going through another team to gain access.)

The Forwarders are universal so at least we're good there.

martin_mueller
SplunkTrust
SplunkTrust

Indexer clusters have been around since 2012 / 5.0... if you're still on 4.x you really should upgrade.

teedilo
Path Finder

Well, we're actually on 5.0.1, so it sounds like we could take advantage of this. It's just that our group doesn't have the resources for a full- or even half-time Splunk administrator, so it's difficult to stop and smell the roses. A coworker and I have already spent more time on Splunk than we can really afford. SO much administration.

martin_mueller
SplunkTrust
SplunkTrust

Please please please don't start indexer clustering on 5.x today. UPGRADE! Such features, much convenience, wow.

teedilo
Path Finder

Thanks, no, I should have said I wasn't seriously considering doing this on 5.x. We're stuck on 5.x because it's so painful to upgrade our Forwarders (since we need to work through another team on this) and apparently the Indexer can't be that much ahead of the Forwarders in versions. We'll probably get around to upgrading eventually -- just trying to find the time.

martin_mueller
SplunkTrust
SplunkTrust

See http://docs.splunk.com/Documentation/Forwarder/7.2.1/Forwarder/Compatibilitybetweenforwardersandinde...

A 5.0 forwarder can talk to indexers up to 6.5 just fine; newer indexers need a bit of work around SSL.
For indexer clustering, 6.5 is light-years ahead of 5.0.

teedilo
Path Finder

Thanks again, Martin. I'm familiar with that article. We still have some Forwarders running 4.x, but I hope we can take time to upgrade everything soon.

teedilo
Path Finder

Thanks for the suggestion, Björn. That sounds like a worthwhile troubleshooting exercise, though I'm not really familiar with having a given log file processed by more than one Indexer. Something I'll need to figure out, I guess.

I'll hold off accepting this as the final answer to allow for other ideas, though I concede I probably can't expect much more given how little info I've been able to provide.

bjoernjensen
Contributor

That's fine with me 🙂

Are all indexers running on the same platform/version? And are all sources input the same way (all monitor stanzas)?

As for platforms, there might always be a whole set of pitfalls:
- EOL: LF vs CRLF
- paths: forward slash vs back slash
- ...
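For the EOL case, the Unix file utility usually tells you right away:

$ file server-plain.log
server-plain.log: ASCII text, with CRLF line terminators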

Good luck

teedilo
Path Finder

Thanks again for the great questions and suggestions, Björn. You gave me the idea to switch the server setting in outputs.conf on the Forwarders to point only at the Indexer that wasn't filtering as expected, check the behavior there, and then point it back at the two Indexers where the filtering had been working. To my surprise, I saw the same bad behavior even when pointing at the two previously good Indexers.

I then noticed the bad behavior was only occurring for a few Forwarders. It turns out the Forwarders whose logs weren't being filtered properly were running a very old version of Splunk. I'm not sure why logs from those Forwarders wouldn't still be handled by the props.conf and transforms.conf changes on the Indexers, but I'm betting the old Splunk version on the affected Forwarders has everything to do with this problem. I'll upgrade those Forwarders at some point and see whether that fixes the issue.
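The forwarder-side switch was just the target list in outputs.conf (host names made up):

# outputs.conf on a test Forwarder -- first point only at the suspect
# Indexer, then swap to the two known-good ones and compare.
[tcpout]
defaultGroup = test_idx

[tcpout:test_idx]
server = indexer3.example.com:9997
# later: server = indexer1.example.com:9997,indexer2.example.com:9997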

I went ahead and marked your answer as accepted. Thanks again.
