Splunk standalone: How to split main data into multiple indexes?

juju
Explorer

I installed Splunk standalone (9.0.4) with Ansible (https://github.com/splunk/splunk-ansible/) on Ubuntu Jammy.

That has worked well. Data is ingested on port 9997 and, for now, everything goes to the main index.

I want to split things between multiple indexes, e.g. Windows, Linux, and other source types.

I think this would be done through transforms, as per https://docs.splunk.com/Documentation/Splunk/9.0.4/Forwarding/Routeandfilterdatad, but that seems to be valid only for the heavy forwarder role. Or through the cluster master, as per https://github.com/splunk/splunk-ansible/blob/develop/roles/splunk_cluster_master/tasks/configure_in...

In the role variables, I only found SmartStore with an index array, but I believe that is something different.

What I tried:

* forwarding with a transform in /opt/splunk/etc/system/local/props.conf and /opt/splunk/etc/system/local/transforms.conf, but it did not work:

 

 

$ sudo cat /opt/splunk/etc/system/local/props.conf
# https://docs.splunk.com/Documentation/Splunk/9.0.4/Indexer/Setupmultipleindexes
[SOURCE1]
TRANSFORMS-index = SOURCE1Redirect
$ sudo cat /opt/splunk/etc/system/local/transforms.conf
[SOURCE1Redirect]
#REGEX = ,"file":{"path":"\/var\/log\/SOURCE1\/SOURCE1.log"}},"message":
REGEX = ^{.*SOURCE1.*}$
DEST_KEY = _MetaData:Index
FORMAT = SOURCE1

* getting the TCP data input but losing all the extracted JSON fields, leaving only raw, unusable data. Similar to https://community.splunk.com/t5/Getting-Data-In/Splunk-is-adding-weird-strings-like-quot-linebreaker...
* setting a data receiver in the forwarding section and setting the index in inputs.conf, but no data gets ingested, even though tcpdump shows data arriving. I also could not find how to associate a specific receiver port with an index.

The inputs.conf I tried:
$ sudo more /opt/splunk/etc/system/local/inputs.conf
[splunktcp://9997]
disabled = 0

[splunktcp://9525]
disabled = 0
index = sourcetype1

Any advice?

 

Thanks


juju
Explorer

Thanks all!

As said, I got it working with multiple Splunk HEC collections.

I will check on the Cribl side whether there is a better way.


gcusello
SplunkTrust

Hi @juju ,

If one answer solves your need, please accept it for the benefit of other Community members, or tell us how else we can help you.

Ciao and happy splunking

Giuseppe

P.S.: Karma points are appreciated by all the contributors ;-)


gcusello
SplunkTrust

Hi @juju,

First of all: sending data to indexes other than main is possible only for new data. Once an event is indexed in one index, it cannot be moved to another one, so for old data the only option is to re-index all of it.

If instead you're speaking of new data, the best approach is to define the index value in inputs.conf, so the first question is: how do you ingest your data?

You spoke of port 9997, which means you receive data from other forwarders, so the best and easiest approach is to add the option index = <your_index> to the inputs.conf of the add-ons you use to collect the data.
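For example, a minimal sketch of such an input stanza (the monitored path, sourcetype, and index name here are placeholders, not taken from your setup; adjust them to the add-on you actually use):

```
# inputs.conf on the sending forwarder (or in the add-on's local/ directory)
[monitor:///var/log/SOURCE1/SOURCE1.log]
index = linux_logs
sourcetype = SOURCE1
```

The target index must already exist on the indexer, otherwise the events are dropped or sent to the lastChanceIndex if one is configured.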

You can also override the index value on the indexer, using the method you shared in your question, which is described in many Community answers.

The only constraint is that this job must be done by the first full Splunk instance (not a Universal Forwarder) that the data passes through.

In other words, if your architecture includes one or more intermediate heavy forwarders, you must put the override on them, so the data is sent to the correct index.

If instead you don't have any HF, you can put it on the indexer; in other words, the claim that this is only a heavy forwarder role is, in general, wrong!

And the transformation is never applied on the cluster master!
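Put together, the indexer-side override from your question can be sketched like this (the sourcetype stanza, regex, and target index name are placeholders, and the target index must already be defined in indexes.conf):

```
# /opt/splunk/etc/system/local/props.conf
# the stanza name must match the sourcetype of the incoming events
[SOURCE1]
TRANSFORMS-index = SOURCE1Redirect

# /opt/splunk/etc/system/local/transforms.conf
# fires on events whose raw text matches the regex
[SOURCE1Redirect]
REGEX = SOURCE1
DEST_KEY = _MetaData:Index
FORMAT = linux_logs
```

Note that index-time transforms only run where the data is parsed; if the events arrive already cooked and parsed, the transform never fires.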

Ciao.

Giuseppe


juju
Explorer

Thanks @gcusello 

The source is Cribl, sending to a Splunk single instance as the destination. There is no index option there:

https://docs.cribl.io/stream/destinations-splunk

As this is Splunk standalone, there are no other Splunk servers.


gcusello
SplunkTrust

Hi @juju,

The problem is that, reading the Cribl documentation: "From the perspective of the receiving Splunk Cloud instance, the data arrives cooked and parsed."

This means it isn't possible to modify the index assignment as described in the documentation.

You should ask Cribl support whether it's possible to add the index definition on the Cribl side.

Ciao.

Giuseppe


PickleRick
SplunkTrust

I've never used Cribl, but since the data is supposed to arrive parsed, all metadata is sent along with each event. So you should be able to set the index field within the Cribl pipeline.
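As a sketch (I'm assuming Cribl Stream's Eval function here, based on its documentation; the field value is illustrative and untested):

```
# Cribl Stream pipeline: an Eval function that sets the Splunk index metadata
# Evaluate Fields:
#   Name:  index
#   Value Expression:  'linux_logs'
```

The Splunk destination should then route each event to whatever index its index field carries, provided that index exists on the receiving Splunk instance.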

 

juju
Explorer

I managed to split the data across indexes with multiple Splunk HEC inputs, each with a matching index, as defined in /opt/splunk/etc/apps/search/local/inputs.conf and /opt/splunk/etc/apps/search/local/indexes.conf.

Not sure if this is the recommended way, but so far it works.
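For reference, anonymized sketches of those two files (the tokens, stanza names, and index names are placeholders):

```
# /opt/splunk/etc/apps/search/local/inputs.conf
[http://linux_hec]
disabled = 0
token = 11111111-2222-3333-4444-555555555555
index = linux_logs

[http://windows_hec]
disabled = 0
token = 66666666-7777-8888-9999-000000000000
index = windows_logs

# /opt/splunk/etc/apps/search/local/indexes.conf
[linux_logs]
homePath = $SPLUNK_DB/linux_logs/db
coldPath = $SPLUNK_DB/linux_logs/colddb
thawedPath = $SPLUNK_DB/linux_logs/thawedb

[windows_logs]
homePath = $SPLUNK_DB/windows_logs/db
coldPath = $SPLUNK_DB/windows_logs/colddb
thawedPath = $SPLUNK_DB/windows_logs/thawedb
```

Each Cribl output then targets the HEC endpoint whose token maps to the desired index.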
