Getting Data In

Why is my index not being populated with my current configuration?

pipegrep
Path Finder

We've been chugging along fine with our 4 unreplicated indexers. I'd like to add a new index now, but have gotten stuck.

This app is successfully deployed from the deployment server:
/opt/splunkforwarder/etc/apps/throwaway_app/

cat /opt/splunkforwarder/etc/apps/throwaway_app/bin/inputs.conf 
[script:///opt/splunkforwarder/etc/apps/throwaway_app/bin/topn1.sh]
disabled = false
index = throwaway
# Run every 15 minutes
interval = 900
source = throwaway_top
sourcetype = script:///opt/splunkforwarder/etc/apps/throwaway_app/bin/topn1.sh

[monitor:///opt/logfile.log]
index = throwaway
disabled = false
sourcetype = throwaway.logfile

cat /opt/splunkforwarder/etc/apps/throwaway_app/bin/topn1.sh
#!/bin/bash
#top -n 1 | grep splun[k] | awk '{print $3" "$6" "$7}'
ps -ef
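A side note on the commented-out line above: the `grep splun[k]` bracket trick exists so that grep does not match its own process in the `ps`/`top` output. A minimal, self-contained illustration (the two sample lines are made up):

```shell
# The regex splun[k] matches the literal text "splunk", but the pattern
# string itself ("splun[k]") does not contain the substring "splunk",
# so a grep process running this pattern never matches its own command line.
printf 'root 1 /opt/splunkforwarder/bin/splunkd\nuser 2 grep splun[k]\n' \
  | grep 'splun[k]'
# prints only: root 1 /opt/splunkforwarder/bin/splunkd
```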

This is appended to the end of a well-used indexes.conf file and is successfully deployed to the indexers:

[throwaway]
homePath   = volume:primary/throwaway
coldPath   = volume:primary/throwaway/colddb
thawedPath = $SPLUNK_DB/throwaway/thaweddb
tstatsHomePath = volume:primary/throwaway/datamodel_summary
summaryHomePath = volume:primary/throwaway/summary
maxMemMB = 20
maxHotBuckets = 10
maxConcurrentOptimizes = 6
maxTotalDataSizeMB = 4294967295
maxWarmDBCount = 9999999
maxDataSize = auto_high_volume
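Once a stanza like this is deployed, btool will show what the indexer actually sees and which file each setting comes from. A sketch, assuming a full Splunk Enterprise install under /opt/splunk on the indexer:

```shell
# Show the effective [throwaway] stanza with file provenance per setting
/opt/splunk/bin/splunk btool indexes list throwaway --debug
```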

The throwaway index is recognized and listed by this search, with the settings I put in indexes.conf:

 | eventcount summarize=false index=* | dedup index | fields index

As mentioned above, data is not appearing in the new index, either when I search or when I look for a folder on disk. I thought that new data would force the creation of the index's directory structure, but nothing is being created. I may be under the false impression that, since we are not replicating data, we are not using a cluster master.

I've been reading through the docs, but everything seems to point to clustered (replicated?) indexers, which I don't have. Can someone help?

1 Solution

pipegrep
Path Finder

I didn't pick up any errors, but while poking at it this morning I decided to manually restart the splunk service on the indexers. After that, the directories were created and entries from logfile.log were indexed. For some reason I thought this restart happened automatically, but I was wrong.

./splunk list monitor is handy; is there a similar argument to check other inputs, such as scripted inputs? The help output for "list" is a little terse.
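To partially answer my own question: the CLI takes an object per input type, so the scripted-input analogue appears to be `list exec` (this is an assumption from the same CLI family as `add exec` / `remove exec`; verify against your version's CLI help):

```shell
# List configured scripted inputs on the forwarder
/opt/splunkforwarder/bin/splunk list exec
```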


Yasaswy
Contributor

Ah... the restart is optional; there's a checkbox on the deployment server to restart splunkd on the client when you deploy the app. You can also use btool to see what inputs are active, e.g.:
./splunk cmd btool inputs list

I think your issue might mostly be the app not being deployed to all the forwarders. Good to know it's resolved.
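The deployment-server restart option mentioned above lives in serverclass.conf as a per-app setting. A hedged sketch (the server class name below is an assumption; the app name is taken from the thread):

```
# serverclass.conf on the deployment server
[serverClass:throwaway_class:app:throwaway_app]
restartSplunkd = true
```

With this set, deployment clients that receive the app restart splunkd automatically instead of requiring a manual restart.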


Yasaswy
Contributor

Hi, it would be difficult to pinpoint the exact issue with just the above information, but if you have not set up clustering and are just operating on a distributed architecture, it should have nothing to do with replication. Are the forwarders sending the data from the newly defined app? On the forwarder, check splunkd.log and also run ./splunk list monitor to confirm it has picked up your log file "/opt/logfile.log", just to make sure the app has been picked up and the inputs are being monitored.
Do you see any errors in splunkd.log on your forwarders or indexers?
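A quick way to do that check from the shell, assuming a standard UF install under /opt/splunkforwarder (ExecProcessor is the component that logs scripted-input activity and errors):

```shell
# Look for monitor and scripted-input activity in the forwarder's splunkd.log
grep -Ei 'logfile\.log|ExecProcessor' \
  /opt/splunkforwarder/var/log/splunk/splunkd.log | tail -20
```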


mattrkeen
New Member

You may also want to use ./splunk list forward-server as well, to make sure the indexer is visible to the forwarder.

You can also search index=_internal host="splunk_forwarder_servername" to make sure the forwarder is reaching the indexer. A good result is seeing metrics logs, which means the forwarder is actively communicating with the indexer.
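To narrow that search to the relevant metrics, you can filter on the tcpout connection group (the host name is a placeholder, as above):

```
index=_internal host="splunk_forwarder_servername" source=*metrics.log* group=tcpout_connections
```

Events in this group report bytes sent per output connection, so seeing them regularly confirms the forwarding path is up.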


pipegrep
Path Finder

Interesting command. Everything is working as it should, but when I run it I get this:

/opt/splunkforwarder/bin/splunk list forward-server
Active forwards:
None
Configured but inactive forwards:
sljplgf01.jostens.com:9997
sljplgf02.jostens.com:9997

My data travels from the UF on the host, to an intermediate forwarder, to an indexer. The sljplgf0* servers are the intermediates. Are these only listed as active when transmitting data? Again, I'm getting fresh data from the host regularly now.
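My understanding (hedged) is that "active" in list forward-server reflects whether a TCP session to that output is currently open, so an idle or load-balanced connection can show as "configured but inactive" even though data flows fine overall. One way to cross-check from the OS side, assuming the default port 9997 shown above:

```shell
# On the UF host: is there an established TCP session to an intermediate forwarder?
netstat -tn | grep ':9997'
```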
