Getting Data In

Why is my index not being populated with my current configuration?

pipegrep
Path Finder

We've been chugging along fine with our 4 unreplicated indexers. I'd like to add a new index now, but have gotten stuck.

This app is successfully deployed from the deployment server:
/opt/splunkforwarder/etc/apps/throwaway_app/

cat /opt/splunkforwarder/etc/apps/throwaway_app/bin/inputs.conf 
[script:///opt/splunkforwarder/etc/apps/throwaway_app/bin/topn1.sh]
disabled = false
index = throwaway
# Run every 15 minutes
interval = 900
source = throwaway_top
sourcetype = script:///opt/splunkforwarder/etc/apps/throwaway_app/bin/topn1.sh

[monitor:///opt/logfile.log]
index = throwaway
disabled = false
sourcetype = throwaway.logfile

cat /opt/splunkforwarder/etc/apps/throwaway_app/bin/topn1.sh
#!/bin/bash
#top -n 1 | grep splun[k] | awk '{print $3" "$6" "$7}'
ps -ef

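As an aside, a scripted input that prints nothing indexes nothing, which can be hard to notice. A slightly hardened sketch of topn1.sh (same ps -ef payload as above; the error-handling pattern is my suggestion, not something Splunk requires):

```shell
#!/bin/bash
# Hardened sketch of topn1.sh: same ps -ef output as the original,
# but fail loudly on stderr if ps fails or produces no output,
# so the problem shows up in splunkd.log instead of just missing data.
out=$(ps -ef) || { echo "topn1.sh: ps failed" >&2; exit 1; }
[ -n "$out" ] || { echo "topn1.sh: empty output" >&2; exit 1; }
printf '%s\n' "$out"
```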
This is appended to the end of a well-used indexes.conf file and is successfully deployed to the indexers:

[throwaway]
homePath   = volume:primary/throwaway
coldPath   = volume:primary/throwaway/colddb
thawedPath = $SPLUNK_DB/throwaway/thaweddb
tstatsHomePath = volume:primary/throwaway/datamodel_summary
summaryHomePath = volume:primary/throwaway/summary
maxMemMB = 20
maxHotBuckets = 10
maxConcurrentOptimizes = 6
maxTotalDataSizeMB = 4294967295
maxWarmDBCount = 9999999
maxDataSize = auto_high_volume
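A quick way to confirm the stanza actually landed on each indexer and parses cleanly is btool (the /opt/splunk path below assumes a default full-Splunk install on the indexers):

```shell
# Show the effective [throwaway] settings and which file each one comes from:
/opt/splunk/bin/splunk btool indexes list throwaway --debug

# Validate all .conf files for typos and unknown settings:
/opt/splunk/bin/splunk btool check
```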

The throwaway index is recognized and listed by this search, with the settings I put in indexes.conf:

 | eventcount summarize=false index=* | dedup index | fields index

As mentioned above, data is not accumulating in the new index, either when I search or when I look for a folder on disk. I thought that new data would force the creation of the index folder structure, but nothing is getting created. I may be under the false impression that, since we are not replicating data, we are not using a master.

I've been reading through the docs, but everything seems to point to clustered (replicated?) indexers, which I don't have. Can someone help?

1 Solution

pipegrep
Path Finder

I didn't pick up any errors, but while poking at it this morning I decided to manually restart the Splunk service on the indexers. After that, the directories were created and entries from logfile.log were indexed. For some reason I thought this restart happened automatically, but I was wrong.
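For future reference, whether a deployed app triggers a restart is controlled per server class on the deployment server; a minimal serverclass.conf fragment (stanza and class names here are hypothetical) would be:

```
[serverClass:throwaway_class:app:throwaway_app]
restartSplunkd = true
```

Note this only applies to deployment clients; indexers that receive indexes.conf some other way still need their own restart (or rolling restart) to pick up a new index.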

./splunk list monitor is handy; is there a similar argument to check other inputs, like scripted inputs? The help output for "list" is a little terse.


Yasaswy
Contributor

Ah... the restart is optional; when you deploy the app from the deployment server there should be a "Restart Splunkd" checkbox for it. You can also use btool to see what inputs are active, e.g.:
./splunk cmd btool inputs list
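To your follow-up question: scripted inputs have their own CLI object, so (if memory serves on the syntax) this should list them:

```shell
./splunk list exec
```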

I think your issue might mostly be the app not being deployed to all the forwarders. Good to know it's resolved.


Yasaswy
Contributor

Hi, it would be difficult to pinpoint the exact issue with just the above information, but if you have not set up any clustering and are just operating in a distributed architecture, it should have nothing to do with replication. Are the forwarders sending the data from the newly defined app? On the forwarder, check splunkd.log and run ./splunk list monitor to confirm that the app has been picked up and your log file "/opt/logfile.log" is being monitored.
Do you see any errors in splunkd.log on your forwarders or indexers?


mattrkeen
New Member

You may also want to use ./splunk list forward-server to make sure that the indexer is being seen by the forwarder.

You can also search index=_internal host="splunk_forwarder_servername" to make sure that the forwarder is reaching the indexer. A good result is seeing metrics log events, which means the forwarder is actively communicating with the indexer.
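For example, from the CLI on the indexer (or paste the quoted search into Splunk Web; the hostname value is a placeholder for your forwarder's name):

```shell
/opt/splunk/bin/splunk search 'index=_internal sourcetype=splunkd group=tcpin_connections hostname="splunk_forwarder_servername" | head 10'
```

Events in the tcpin_connections metrics group record inbound forwarder connections, so any hits here mean the forwarder is getting through.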


pipegrep
Path Finder

Interesting command. Everything is working as it should, but when I run it I get this:

/opt/splunkforwarder/bin/splunk list forward-server
Active forwards:
None
Configured but inactive forwards:
sljplgf01.jostens.com:9997
sljplgf02.jostens.com:9997

My data travels from the UF on the host, to an intermediate forwarder, to an indexer. The sljplgf0* servers are the intermediates. Are these only active when transmitting data? Again, I'm getting fresh data from the host regularly now.
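One way to check outside of Splunk whether the UF currently holds open sessions to the intermediates (port 9997 taken from the output above; use netstat -tan instead of ss on older hosts):

```shell
ss -tan | grep ':9997'
```

If nothing shows as ESTABLISHED between batches of data, the "configured but inactive" status likely just reflects an idle connection rather than a problem.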
