Getting Data In

HF no longer sends _internal to Splunk Cloud after changing outputs.conf: do we need indexAndForward?

SplunkExplorer
Contributor

Hi Splunkers, in one Splunk environment I manage we implemented a filtering and routing strategy. As described in another post here on the community, we worked on our HFs and configured them to capture a subset of data for each sourcetype and send it to a UEBA solution.

Due to some issues we worked with ODS, and we have now achieved our goal: all data continues to be sent to Splunk Cloud, and a subset of it also goes to the UEBA. This was achieved by changing three files on the HF (a minimal sketch follows the list):

  • outputs.conf
  • props.conf (in the add-on where the specific sourcetype is configured)
  • transforms.conf (in the add-on where the specific sourcetype is configured)
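For illustration, a minimal sketch of that pattern, assuming a tcpout group for the UEBA destination (all stanza names, hostnames, and the regex below are hypothetical placeholders, not our actual config; a syslog output would use _SYSLOG_ROUTING as the DEST_KEY instead):

# outputs.conf (HF): Splunk Cloud is the default destination; UEBA is a second group
[tcpout]
defaultGroup = splunkcloud

[tcpout:splunkcloud]
server = inputs.example.splunkcloud.com:9997

[tcpout:ueba]
server = ueba-collector.example.com:9997

# props.conf (in the add-on): attach the routing transform to the sourcetype
[my_sourcetype]
TRANSFORMS-route_ueba = route_subset_to_ueba

# transforms.conf (in the add-on): events matching REGEX are cloned to both groups;
# everything else follows defaultGroup and goes only to Splunk Cloud
[route_subset_to_ueba]
REGEX = <pattern matching the subset of events>
DEST_KEY = _TCP_ROUTING
FORMAT = splunkcloud,ueba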

However, after this first configuration, we noticed that on Splunk Cloud we were no longer able to see the _internal logs from the HF. That is, if we launched this search on the SH:

index=_internal host=<HF hostname>

no results were returned. I stress that, before changing the outputs.conf file, we were able to see them on the SH.
I searched on Google and here on the community, and I found some topics stating, in a nutshell, that this behavior can be normal when outputs.conf is changed to add other log destinations.

So ODS suggested that we add the parameter indexAndForward=true in the HF's outputs.conf.
We followed the suggestion and were once again able to see _internal logs on the SH; but, as expected, one HF then reported a low-disk-space error, which led it to stop forwarding and producing _internal logs.
So we changed indexAndForward back to false and stopped the UEBA forwarding for now, and the HF started producing _internal logs and sending data to Splunk Cloud again.

To come to a conclusion, my final question is: since the indexAndForward parameter naturally consumes disk space (the HF starts indexing data like an indexer), how can we achieve our goal? That is, how can we keep or adjust outputs.conf so that data is sent to our UEBA while we continue to see the HF's _internal logs on the SH?

1 Solution

gcusello
SplunkTrust

Hi @SplunkExplorer,

the indexAndForward parameter makes the HF locally index a copy of your data while still sending logs to the destination (Splunk Cloud in your case);

to my knowledge, it has no relation to the missing _internal logs.

The only cause I can imagine for your main issue is congestion in log sending, since _internal has lower precedence than other logs; the cause could be the network, or insufficient resources on the HF (CPU and disk).

Also, when you configured the "fork" to send data to two destinations, did you also insert the _TCP_ROUTING parameter in your inputs.conf files?
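For reference, a minimal sketch of that setting, with a hypothetical monitor path; the group names must match the [tcpout:<name>] stanzas in outputs.conf:

# inputs.conf: route this input's events to both output groups
[monitor:///var/log/example.log]
sourcetype = example_st
_TCP_ROUTING = splunkcloud,ueba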

Could you share your outputs.conf and inputs.conf files?

Ciao.

Giuseppe


PickleRick
SplunkTrust

Ok, depending on your configuration, you might not need to use _TCP_ROUTING at all. Quoting the specs for outputs.conf:

# If you specify multiple servers in a target group, the forwarder
# performs auto load-balancing, sending data alternately to each available
# server in the group. For example, assuming you have three servers
# (server1, server2, server3) and autoLBFrequency=30, the forwarder sends
# all data to server1 for 30 seconds, then it sends all data to server2 for
# the next 30 seconds, then all data to server3 for the next 30 seconds,
# finally cycling back to server1.
#
# You can have as many target groups as you want.
# If you specify more than one target group, the forwarder sends all data
# to each target group. This is known as "cloning" the data.

So if you define two separate target groups, events should be pushed to both of them.
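A minimal sketch of that cloning setup (hypothetical group names and addresses): listing both groups in defaultGroup sends every event to each group.

# outputs.conf: two target groups, both set as default, so all data is cloned
[tcpout]
defaultGroup = splunkcloud,ueba

[tcpout:splunkcloud]
server = inputs.example.splunkcloud.com:9997

[tcpout:ueba]
server = ueba-collector.example.com:9997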

The indexAndForward setting tells the Splunk component whether, in addition to sending the data to its output(s), it should also index the data locally. So if this parameter is set to false, your Splunk server works as an HF; if it's true, you have an indexer.
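In outputs.conf that is a single boolean under the [tcpout] stanza, e.g.:

# outputs.conf: forward only, do not index a local copy (the default)
[tcpout]
indexAndForward = false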

As @gcusello already mentioned, check your bandwidth limits, because you could be congesting your outputs (though the default maxKBps for an HF is 0, which means unlimited, so that's unlikely).
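That limit lives in limits.conf; a sketch of the relevant stanza:

# limits.conf: 0 means unlimited forwarding throughput (the HF default)
[thruput]
maxKBps = 0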

One thing that bothers me, though, is why you send data straight to UBA. As far as I remember, UBA connected to search heads and performed searches, right? But I haven't touched UBA for a long time, so I might be misremembering something.

SplunkExplorer
Contributor

We do not use Splunk UBA, but a third-party service, the Exabeam UEBA.
I appreciate all your explanations; I'm going to share them with my colleagues today.


PickleRick
SplunkTrust

Ahhh, OK. Since we're on a Splunk forum, I assumed you were talking about Splunk's UBA.

When you have a hammer, everything looks like a nail 😉

BTW, does that Exabeam UEBA receive data via S2S, or are you using a plain TCP output?


SplunkExplorer
Contributor

Exabeam has a component called Site Collector that can be compared, more or less, to a Splunk HF; it can receive data in many ways, including syslog over both TCP and UDP.
We enabled a TCP syslog output and, after much work, we are now able to receive data on both Splunk Cloud (all data) and the UEBA (a subset of data for some sourcetypes).
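For the record, a syslog destination is defined in outputs.conf as its own group type; here is a sketch with a hypothetical Site Collector address (note that selective routing to a syslog group uses _SYSLOG_ROUTING, not _TCP_ROUTING, as the DEST_KEY in transforms.conf):

# outputs.conf: plain TCP syslog output toward the Exabeam Site Collector
[syslog:ueba_syslog]
server = site-collector.example.com:514
type = tcp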

Based on what Giuseppe and you explained to me, and because so far we haven't needed to modify inputs.conf, our first change will be to the indexAndForward parameter; I expect that, even setting it to false, the double forwarding should keep working. Otherwise, we will modify the inputs.conf file as well.

I'll keep you both updated 😉



SplunkExplorer
Contributor

You were right, Giuseppe: the indexAndForward setting has no effect on _internal. We simply found that the previous config contained an error that was fixed at the same time we ALSO added indexAndForward=true, and this led us to believe the two were correlated.

We set indexAndForward=false and data continues to flow correctly to SC and the UEBA. I also confirm that adding the _TCP_ROUTING key to inputs.conf was not required in our environment.


PickleRick
SplunkTrust

To add to this - by default there are whitelists/blacklists in place on tcpout:

forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*
forwardedindex.2.whitelist = (_audit|_internal|_introspection|_telemetry|_metrics|_metrics_rollup|_configtracker)

That means that all indexes are forwarded by default, indexes whose names begin with an underscore are then excluded, and the explicitly listed internal indexes (including _internal) are whitelisted again, since higher-numbered filters take precedence. So you should not see any problems with _internal index forwarding (unless you broke that yourself ;-)).


SplunkExplorer
Contributor

Hi Giuseppe,

currently I have no access to the systems where the files are located; I have to wait until Monday.
I understand what you are talking about; it's what we discussed in my topic and via message. In the end we didn't add _TCP_ROUTING because ODS told us it was not needed (I sent them the post where you encountered and solved the issue); their reasoning is that, since we are using TCP forwarding for both destinations (Splunk Cloud and the UEBA) and not syslog, the explicit setting in inputs.conf is not required. But from your words I understand it is required every time you split the forwarding, so I will definitely apply the change on Monday. I'll keep you updated.

Thanks a lot for your help, precious as always.
