Can you check the splunkd logs for errors? Especially for that folder/file name?
index=_internal host=yourforwarder NOT log_level=Info
Thanks Somesoni2, I was seeing
ERROR TailReader - File will not be read, is too small to match seekptr checksum (file= /Users/ybiyni/Desktop/Work/Text Files/SampleLog-2016-07-01-15_36_11.log). Last time we saw this initcrc, filename was different. You may wish to use larger initCrcLen for this sourcetype, or a CRC salt on this source.
After adding crcSalt = in inputs.conf, I am able to read all the files. Thank you very much.
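For anyone hitting the same TailReader error: a minimal inputs.conf sketch, using the monitored directory and sourcetype name from this thread (the exact stanza used here is an assumption) and the documented literal value <SOURCE> for crcSalt:

```ini
# Hypothetical inputs.conf stanza. crcSalt = <SOURCE> (the literal string)
# mixes the full file path into the CRC, so files whose beginnings look
# identical are still treated as distinct sources.
[monitor:///Users/ybiyni/Desktop/Work/Text Files]
crcSalt = <SOURCE>
sourcetype = mysinglefilesourcetype
```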
Now, the only issue remaining is extracting the timestamp from the file names... :)
Are all source files named differently?
What do you get if you search index="myindex" | stats count by source ?
Are all fields in the file the same?
Hello singhh4, I get 27 files.
Anyway, I followed the directions from http://blogs.splunk.com/2009/12/02/configure-splunk-to-pull-a-date-out-of-a-non-standard-filename/ and added the following to /etc/system/local/datetime.xml:
<define name="_combdatetime3" extract="year, ignored_sep, month, ignored_sep1, day, ignored_sep2, hour, ignored_sep3, minute, ignored_sep4, second">
<!-- ... 2016-07-06-08_43_32 ... -->
<text><![CDATA[(?:^|source::).*?(20\d\d)([-/_])(0\d|1[012])([-/_])([012]?\d|3[01])([-/_])([012]?\d)([-/_])([0-6]?\d)([-/_])([0-6]?\d)]]></text>
</define>
<datePatterns>
.....
<use name="_combdatetime3"/>
</datePatterns>
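As a quick sanity check outside Splunk, the CDATA regex above can be exercised in plain Python against the sample path from this thread (capture groups follow the pattern's order: 1=year, 3=month, 5=day, 7=hour, 9=minute, 11=second; the even-numbered groups hold the separators):

```python
import re

# Same pattern as in the datetime.xml <text> element above
pattern = re.compile(
    r"(?:^|source::).*?(20\d\d)([-/_])(0\d|1[012])([-/_])"
    r"([012]?\d|3[01])([-/_])([012]?\d)([-/_])([0-6]?\d)([-/_])([0-6]?\d)"
)

# Sample source path from this thread
src = "source::/Users/ybiyni/Desktop/Work/Text Files/SampleLog-2016-07-01-15_36_11.log"

m = pattern.search(src)
if m:
    # Odd-numbered groups hold the datetime parts; even ones hold separators
    year, month, day = m.group(1), m.group(3), m.group(5)
    hour, minute, second = m.group(7), m.group(9), m.group(11)
    print(f"{year}-{month}-{day} {hour}:{minute}:{second}")  # 2016-07-01 15:36:11
```

If this prints nothing for one of your file names, the regex would not match inside Splunk either, which would explain the missing timestamps.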
And as I want each file to be a single event, I have added the following to the props.conf file:
[mysinglefilesourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ((*FAIL))
TRUNCATE = 99999999
DATETIME_CONFIG = /etc/system/local/datetime.xml
I downloaded a new dataset of 279 files and added it to Splunk. For this dataset too, only 34 events were identified...
The source file names are of the form /Users/ybiyni/Desktop/Work/Text Files/SampleLog-2016-07-01-15_36_11.log, so it looks like I have been unsuccessful in extracting the timestamps...
Any suggestions?
I am seeing the # of files as 303 in the Settings > Data Inputs > Files & Directories view, whereas ls -1 | wc -l shows 301 in the console.
I think Splunk is unable to determine the timestamps of these files, and hence the error. I will try the suggestions from this Answers post to see if it helps:
https://answers.splunk.com/answers/311452/how-to-use-date-in-filename-as-the-timestamp-for-e.html
Perhaps that page is including . and ..
What are . and ..?
Check the output of ls | wc -l in the console. (. and .. are the current and parent directory entries, which ls -a would include in its listing.)
Also, see the list of files being monitored by Splunk from the console on the Splunk server using this command:
$Splunk_Home/bin/splunk list monitor
where $Splunk_Home is the path where Splunk is installed. It will ask for admin credentials.
hello somesoni2,
ls |wc -l
returns 301 and all the 301 files are listed in
$Splunk_Home/bin/splunk list monitor
The list shown by the splunk list monitor command is the actual set of files being monitored. I've seen the wrong number in the UI before, so I generally ignore it.
Thx.
Any idea why only 27 files are loaded as events? I am expecting the event count to be 301.
What exactly is your search when you "load the files" and see only 27?
For that, you need to check your inputs.conf entries to ensure that all 301 files fulfill the criteria (if any filter is present), check the timestamps in the files' content, and make sure the time range in your query includes all those timestamps.
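As a sketch, assuming the index name myindex used earlier in the thread, a search like this run over All Time shows which timestamps each source actually received:

```
index="myindex" earliest=0
| stats min(_time) AS first_event max(_time) AS last_event count BY source
| convert ctime(first_event) ctime(last_event)
```

Sources whose first_event/last_event fall outside your usual search time range would explain a lower event count than the file count.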
I am not using any filters...
Where are you seeing the value 303 in regards to your first question?