Getting Data In

Indexer not indexing correctly

janet-wfs
Engager

Hi Support Team

I have two Splunk indexers and two forwarders.

Both forwarders have index = test set in inputs.conf, but the indexers have configuration that decides which index to put the data in based on the data itself (one of the values in the JSON object).
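
For illustration, the input stanza on each forwarder looks roughly like this (the monitor path and sourcetype name are placeholders, not my real ones):

[monitor:///var/log/app/events.json]
index = test
sourcetype = app_json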

Forwarder 1 has been running for a while with no problems (it runs version 6.4.1).

Forwarder 2 is new (version 9.2.1) and requires exactly the same configuration as Forwarder 1, which I have already applied. The only difference is the host (host1 vs host2).

The data from Forwarder 2 is being sent to the indexers, but the index is not changed based on the config in the indexers. The data goes to the test index as specified in the forwarder config. Both indexers are running 7.3.3.

What could I be missing to get the indexers to put the data from forwarder 2 in the correct index? Could this not be working due to the different versions of Splunk?

Thanks

1 Solution

janet-wfs
Engager

An update: the problem was not the configuration of Splunk (so the mix of new and old versions seems to be OK in this case). 

The root cause was in the source data. Thanks for your help anyway, PickleRick.


PickleRick
SplunkTrust

1. This is not the Splunk Support service. This is a volunteer-driven community.

2. Your environment is really old.

3. We don't know what forwarder you're using (UF/HF), what configuration you have for your inputs and sourcetypes/sources/hosts on your components.

So the only answer you can get at the moment is "something's wrong".

But seriously - unless it's a forwarder built into some (presumably also obsolete) solution, you should definitely upgrade that 6.4 UF to something less ancient.

As a side note - how have you verified that configs on those forwarders are "the same"?


janet-wifispark
Engager

Hi PickleRick,

Thanks for your answer.

1. Is there such a thing as a support service? I just posted this hoping to get some kind of help, wherever it comes from. 🤞 If there is online support, I'd appreciate it if someone could tell me how to contact them.

2. Yes, the environment is old. Strangely enough, the only problem I'm having is with the 9.2.1 forwarder. The old 6.x one is OK. And the indexers (7.3.3) are a bit old, but they do work.

Yes, I have also verified that the configs are exactly the same. For that reason, I was wondering whether 9.2.1 requires something new or different in the config.

Thanks

 


PickleRick
SplunkTrust

1. Well, if you have a valid contract with Splunk, you're entitled to support. The support portal is here -> https://splunk.my.site.com/customer/s/ (but as far as I remember, you need an account associated with a valid, active support contract, so not just anyone can request support on behalf of your organization; I might be wrong here, though, so verify that).

2. Since the 7.x line has been unsupported for some years now, it's hard to find a compatibility matrix for such an old indexer and a new forwarder. It should generally work, but it's definitely not a supported configuration (at the moment the only supported indexer versions are 9.x). But as long as both ends can negotiate a supported S2S protocol version, they should be relatively fine.

_How_ did you verify the configs? btool?

 


janet-wifispark
Engager

By the way, the forwarder is a Universal one.

I created the config using the splunk commands on the command line, basing it on Forwarder 1's config.

I did verify it by comparing the inputs.conf and outputs.conf files. They are exactly the same.

I just changed the host name.

The data from Forwarder 2 does get sent to the Indexers correctly, but it just goes into the wrong index. That's the only problem I have. 

Both indexers have a props.conf with a stanza named after the sourcetype and a TRANSFORMS-routetoindex setting which points to a stanza in transforms.conf.

The sourcetype is exactly the same on both Forwarders.
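
Roughly, the routing config on the indexers looks like this (the sourcetype, field, and index names below are made up for illustration; the real REGEX keys off one of the values in the JSON):

props.conf:

[app_json]
TRANSFORMS-routetoindex = route_by_json_value

transforms.conf:

[route_by_json_value]
REGEX = "site"\s*:\s*"store42"
DEST_KEY = _MetaData:Index
FORMAT = store42_index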

Not sure if this will give you a clue to the cause of the problem.

Thanks


PickleRick
SplunkTrust

I did verify it by comparing the inputs.conf and outputs.conf files. They are exactly the same.

The files in etc/system/local (because that's where splunk add monitor creates entries, as far as I remember) might be identical, but you may be inheriting some settings from other configs.

That's why I asked about btool. Do

splunk btool inputs list --debug

and

splunk btool outputs list --debug

to see the effective config on both forwarders.

That's the first thing to check.
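
For example (the file names below are just an illustration), you can save the effective config on each forwarder and compare the two:

splunk btool inputs list --debug > /tmp/btool_inputs.txt
splunk btool outputs list --debug > /tmp/btool_outputs.txt

Copy the files from Forwarder 1 next to the ones from Forwarder 2 and run them through diff (or any comparison tool); settings inherited from other apps will show up there.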

Another thing is to verify the props/transforms to see if they - for example - only match a specific subset of data that shows up in events coming from one host but not from the other. It's hard to advise something specific without knowing your config and your data.

 


janet-wifispark
Engager

Thanks again!

The btool command is new to me.

The output is much bigger on Forwarder 2, so there might be something in there.


PickleRick
SplunkTrust

The output of each of those commands will contain settings for all inputs and all outputs respectively, so the size of the output might differ. But if you find a given input or output stanza, you can compare the effective configs for that configuration element.
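
If you only care about one stanza, you can also (as far as I remember) pass a stanza name or prefix to btool to narrow the output, for example:

splunk btool inputs list monitor --debug
splunk btool outputs list tcpout --debug

The stanza names above are just examples; use whatever stanzas your forwarders actually define.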
