I have a 4-server Splunk scenario:
On the index server, I placed the following 200-row csv file and successfully ingested it into the index foo_index using oneshot:
"DateTime","foo"
"10/1/2019 12:03:20 AM","cat"
.. 198 more similar rows
"10/1/2019 11:55:20 PM","dog"
After verifying that the 200 events were ingested, I edited the csv file and added 200 more rows:
"DateTime","foo"
"10/1/2019 12:03:20 AM","cat"
.. 198 more similar rows
"10/1/2019 11:55:20 PM","dog"
"10/2/2019 12:01:20 AM","mouse"
.. 198 more similar rows
"10/2/2019 11:59:59 PM","mouse"
Then, on the deployment server, I created a remote folder monitor and deployed it to the deployment client server, which created \etc\apps\xxx\local\inputs.conf on the deployment client server:
[monitor://D:\foo]
disabled = false
index = foo_index
sourcetype = csv
crcSALT = <SOURCE>
Then, I copied the CSV file to D:\foo on the deployment client server (D:\foo was empty prior to my dropping the CSV file into it).
The new 200 rows were not ingested.
Why not?
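(For anyone debugging the same thing: a first check is to list the client's active monitors from the CLI; a sketch:)
splunk list monitor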
Ignore this question. As it turns out, Splunk did ingest the data. It just took many more hours (2 days, actually) than I would have expected for a few hundred rows from a csv file.
The problem is DEFINITELY NOT the fishbucket, because oneshot bypasses this, so any answer with crcSalt is wrong (and couldn't work unless it is camel-cased correctly). Are you sure that it did not get sent? Did you set restartSplunkd? If not, you need to do so and push it out again. Also fix this:
[monitor://D:\foo\*.csv]
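(For context, restartSplunkd is an app-level setting in serverclass.conf on the deployment server; a minimal sketch, where the server class name and client whitelist are invented for illustration and the app name xxx is taken from the question:)
[serverClass:folder_monitor]
whitelist.0 = deployment-client-host
[serverClass:folder_monitor:app:xxx]
restartSplunkd = true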
Ignore this question. As it turns out, Splunk did ingest the data. It just took many more hours than I would have expected for a few hundred rows from a csv file.
OK, come back and either click Accept on this answer or post your own and Accept that one to close your question.
When you set up an input, you have to give the full path of the file to read. If the csv file name is sample.csv, your input should look like this:
[monitor://D:\foo\sample.csv]
disabled = false
index = foo_index
sourcetype = csv
crcSalt = <SOURCE>
Or use this stanza to index all the csv files: [monitor://D:\foo*.csv]
For further information, please check this link:
https://docs.splunk.com/Documentation/Splunk/latest/Admin/Inputsconf#MONITOR:
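(Side note: you can confirm exactly which monitor stanzas the client ended up with by running btool on the deployment client; a sketch:)
splunk btool inputs list monitor --debug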
My requirement is that I monitor a folder, because many csv files can be loaded to the folder, not just one. The stanza you specified, [monitor://D:\foo*.csv], does not seem to be required. See https://docs.splunk.com/Documentation/Splunk/7.3.1/Data/Monitorfilesanddirectories: "Specify a path to a file or directory and the monitor processor consumes any new data written to that file or directory."
OK, no problem, I just mistyped that stanza. It should be [monitor://D:\foo\*.csv]; this will pick up all the new csv files that get added to this folder. Thanks for your feedback.
My point is that the Splunk docs state that you do not need to specify a file name or filename wildcards to monitor for any file in a folder. All you need is the folder name:
[monitor://D:\SplunkSensorLogs]
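(And if you ever do need to limit a folder monitor to csv files without naming them, the monitor stanza supports a whitelist regex; a sketch, reusing the index and sourcetype from above:)
[monitor://D:\SplunkSensorLogs]
disabled = false
index = foo_index
sourcetype = csv
whitelist = \.csv$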