Getting Data In

How to configure Splunk to store indexed data on a Linux server?

vnguyen46
Contributor

Hi,
I am migrating to new Splunk Enterprise hardware and have all core instances up and functioning. Now I am at the point where I am not sure how to configure the indexers, which run on Linux, to store the indexed data on the correct filesystems (hot/cold). I did not move any data or conf files from the old indexers to the new ones.

Sample stanza from the indexes.conf file:
[f5_asm]
homePath = volume:primary/f5_asm/db
coldPath = volume:cold/f5_asm/colddb
thawedPath = $SPLUNK_DB/f5_asm/thaweddb
frozenTimePeriodInSecs = 15768000

df -h on one of the indexers shows:
[splunk@xxxx ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 94G 0 94G 0% /dev
tmpfs 94G 0 94G 0% /dev/shm
tmpfs 94G 2.7G 92G 3% /run
tmpfs 94G 0 94G 0% /sys/fs/cgroup
/dev/mapper/rhel-root 79G 2.8G 72G 4% /
/dev/sdc2 976M 127M 783M 14% /boot
/dev/sdc1 200M 9.7M 191M 5% /boot/efi
/dev/sdb1 8.7T 84M 8.7T 1% /splunkdata/hot
/dev/sda1 31T 20K 31T 1% /splunkdata/cold
/dev/mapper/rhel-opt 99G 94G 110M 100% /opt   <-- quickly filled up; we mapped this dir to a new 1 TB dir, /opt/splunk/
/dev/mapper/rhel-var 99G 1.2G 93G 2% /var
/dev/mapper/rhel-home 50G 64M 47G 1% /home
naspsnfs:/linux_admin 1.4T 686G 697G 50% /apps/admin/share
tmpfs 19G 0 19G 0% /run/user/139454
tmpfs 19G 0 19G 0% /run/user/114464

Is there something not right in this configuration, and how can I make Splunk store the indexed data in the correct directories?

Thank you,
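One way to see where these paths currently resolve on an indexer is btool plus a quick disk check. A minimal sketch, assuming a default /opt/splunk install (adjust the path if $SPLUNK_HOME differs on your hosts):

# Show the fully resolved settings for the f5_asm index, and which file each comes from
/opt/splunk/bin/splunk btool indexes list f5_asm --debug

# Show where the volume referenced by homePath is actually defined
/opt/splunk/bin/splunk btool indexes list volume:primary --debug

# Check how much space the default index location is consuming under /opt
du -sh /opt/splunk/var/lib/splunk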


richgalloway
SplunkTrust

What is the definition of volume:primary?

---
If this reply helps you, Karma would be appreciated.

vnguyen46
Contributor

On a search peer (indexer), in ./slave-apps/uth_indexer_volume_indexes/local/indexes.conf:
[volume:primary]
path = /opt/splunk/var/lib/splunk
maxVolumeDataSizeMB = 1500000

[volume:_splunk_summaries]
path = /opt/splunk/var/lib/splunk/summaries
# ~100 GB
maxVolumeDataSizeMB = 100000

[volume:cold]
path = /opt/splunk/frozen
# ~5 TB with some headroom left over (data summaries, etc.)
maxVolumeDataSizeMB = 5200000

Are there any changes to the paths that I need to make?

Thank you,


vnguyen46
Contributor

I see that this app is pushed to all indexers from the deployment server. On the DS, it's located at:
/opt/splunk/etc/master-apps/uth_indexer_volume_indexes/local/indexes.conf

Assuming that no changes can be made to the directories at the OS level, how can I change the paths in this indexes.conf file so that the indexed data goes to the intended directories?

Thank you,
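As an aside, /opt/splunk/etc/master-apps is normally the cluster manager's configuration-bundle directory rather than a deployment server location. A rough sketch of how an edit there would typically be pushed to clustered indexers; the app name comes from the post above, everything else is assumed:

# On the cluster master, edit the volume paths in the app
vi /opt/splunk/etc/master-apps/uth_indexer_volume_indexes/local/indexes.conf

# Validate the bundle, then push it to the peer indexers
/opt/splunk/bin/splunk validate cluster-bundle
/opt/splunk/bin/splunk apply cluster-bundle

# If the app really is served by a deployment server instead, reloading it
# would push the change:
# /opt/splunk/bin/splunk reload deploy-server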


adonio
Ultra Champion

You have to define the volumes in indexes.conf. My guess is that these are the volume locations:
/dev/sdb1 8.7T 84M 8.7T 1% /splunkdata/hot
/dev/sda1 31T 20K 31T 1% /splunkdata/cold

here is an example from docs:
https://docs.splunk.com/Documentation/Splunk/8.0.1/Admin/indexesconf#indexes.conf.example

Volume definitions (prefixed with "volume:"):

[volume:hot1]
path = /mnt/fast_disk
maxVolumeDataSizeMB = 100000

[volume:cold1]
path = /mnt/big_disk
# maxVolumeDataSizeMB not specified: no data size limitation on top of the
# existing ones

[volume:cold2]
path = /mnt/big_disk2
maxVolumeDataSizeMB = 1000000
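Applied to the mounts from the df -h output above, a minimal sketch of what the redefined volumes and the existing index stanza might look like, assuming the splunk user owns /splunkdata/hot and /splunkdata/cold and that the size caps should stay a little below the filesystem sizes:

# hot/warm buckets on the 8.7 TB mount
[volume:primary]
path = /splunkdata/hot
maxVolumeDataSizeMB = 8500000

# cold buckets on the 31 TB mount
[volume:cold]
path = /splunkdata/cold
maxVolumeDataSizeMB = 30000000

# the index stanza can stay as-is, since it already references the volumes;
# thawedPath cannot use a volume, so it remains under $SPLUNK_DB
[f5_asm]
homePath = volume:primary/f5_asm/db
coldPath = volume:cold/f5_asm/colddb
thawedPath = $SPLUNK_DB/f5_asm/thaweddb
frozenTimePeriodInSecs = 15768000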

vnguyen46
Contributor

You pointed me in the right direction, and I am trying to figure out how to change the paths to match the designated directories. Thank you,


vnguyen46
Contributor

There is an app that defines volume:primary and other stanzas like cold/summary, and it is pushed down to the indexers. You can control the log storage size and location from that app.

Thank you everyone.
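For anyone who lands here later: once the new paths are in place and data starts rolling, one way to confirm where buckets actually live is the dbinspect search command. A small sketch, using the f5_asm index from the original post:

| dbinspect index=f5_asm
| stats count by splunk_server, state, path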
