
New CSV file not syncing with index

sshahu
Observer

I'm facing a problem with Splunk. I have an index whose data input is a folder of CSV files. When I add another CSV file to that folder, the data from the new source doesn't appear in the index.
I have restarted Splunk many times and even deleted and recreated the index, but the problem is still there.

Any help would be great. Thanks!


richgalloway
SplunkTrust
What are the inputs.conf settings for the folder?
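For reference, one quick way to see the effective monitor settings is Splunk's btool utility (shown here for a *nix install; adjust the path on Windows):

# List every effective inputs.conf stanza and which file each setting comes from
$SPLUNK_HOME/bin/splunk btool inputs list --debug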
---
If this reply helps you, Karma would be appreciated.

sshahu
Observer

I haven't added any configuration for that folder.

Do I need to add any configuration for that folder? If so, please help me; I'm new to Splunk.

One more thing: under Settings > Data inputs, the file count for the folder shows correctly, but when I search the mapped index, fewer files show up than expected.
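To see which sources actually made it into the index, I compare the folder contents against a search like this (my_index is a placeholder for the real index name):

| tstats count where index=my_index by source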

The default inputs.conf file is:

[default]
index = default
_rcvbuf = 1572864
host = $decideOnStartup

[blacklist:$SPLUNK_HOME/etc/auth]

[blacklist:$SPLUNK_HOME/etc/passwd]

[monitor://$SPLUNK_HOME/var/log/splunk]
index = _internal

[monitor://$SPLUNK_HOME/var/log/watchdog/watchdog.log*]
index = _internal

[monitor://$SPLUNK_HOME/var/log/splunk/license_usage_summary.log]
index = _telemetry

[monitor://$SPLUNK_HOME/var/log/splunk/splunk_instrumentation_cloud.log*]
index = _telemetry
sourcetype = splunk_cloud_telemetry

[monitor://$SPLUNK_HOME/etc/splunk.version]
_TCP_ROUTING = *
index = _internal
sourcetype=splunk_version

[batch://$SPLUNK_HOME/var/run/splunk/search_telemetry/*search_telemetry.json]
move_policy = sinkhole
index = _introspection
sourcetype = search_telemetry
crcSalt = <SOURCE>
log_on_completion = 0

[batch://$SPLUNK_HOME/var/spool/splunk]
move_policy = sinkhole
crcSalt = <SOURCE>

[batch://$SPLUNK_HOME/var/spool/splunk/...stash_new]
queue = stashparsing
sourcetype = stash_new
move_policy = sinkhole
crcSalt = <SOURCE>

[fschange:$SPLUNK_HOME/etc]
#poll every 10 minutes
pollPeriod = 600
#generate audit events into the audit index, instead of fschange events
signedaudit=true
recurse=true
followLinks=false
hashMaxSize=-1
fullEvent=false
sendEventMaxSize=-1
filesPerDelay = 10
delayInMills = 100

[udp]
connection_host=ip

[tcp]
acceptFrom=*
connection_host=dns

[splunktcp]
route=has_key:_replicationBucketUUID:replicationQueue;has_key:_dstrx:typingQueue;has_key:_linebreaker:indexQueue;absent_key:_linebreaker:pars$
acceptFrom=*
connection_host=ip

[script]
interval = 60.0
start_by_shell = true

[SSL]
# SSL settings
# The following provides modern TLS configuration that guarantees forward-
# secrecy and efficiency. This configuration drops support for old Splunk
# versions (Splunk 5.x and earlier).
# To add support for Splunk 5.x set sslVersions to tls and add this to the
# end of cipherSuite:
# DHE-RSA-AES256-SHA:AES256-SHA:DHE-RSA-AES128-SHA:AES128-SHA
# and this, in case Diffie Hellman is not configured:
# AES256-SHA:AES128-SHA

sslVersions = tls1.2
cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA$
ecdhCurves = prime256v1, secp384r1, secp521r1

allowSslRenegotiation = true
sslQuietShutdown = false

richgalloway
SplunkTrust
Yes, you must add an inputs.conf stanza (in local, not default) for any file or folder you wish to monitor. Splunk only monitors itself out of the box.
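As a sketch, a minimal monitor stanza in $SPLUNK_HOME/etc/system/local/inputs.conf might look like this (the folder path and index name are placeholders; substitute your own):

[monitor:///opt/data/csv_files]
index = my_csv_index
sourcetype = csv
disabled = false
# If the CSV files begin with identical headers, Splunk's CRC check can
# mistake a new file for one it has already read; uncommenting crcSalt
# makes the check unique per file path.
# crcSalt = <SOURCE>

After saving, restart Splunk so the new input takes effect, and make sure the index named in the stanza already exists.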
---
If this reply helps you, Karma would be appreciated.