Hello,
So I have a forwarder installed on a server, and it shows up under Clients in Forwarder Management.
Then I create a new app in deployment-apps, with a local/inputs.conf like this:
[monitor:///home/cnttm/Vibus/logTransit/application.log]
crcSalt = <SOURCE>
disabled = false
index = mynewindex
[monitor:///home/cnttm/Vibus/logTransit/*.log]
crcSalt = <SOURCE>
disabled = false
index = mynewindex
[monitor:///home/cnttm/Vibus/logTransit/*]
crcSalt = <SOURCE>
disabled = false
index = mynewindex
[monitor:///home/cnttm/*]
crcSalt = <SOURCE>
disabled = false
index = mynewindex
The log file path is: /home/cnttm/Vibus/logTransit/application.log
Then I create a server class, add the app and client to it, enable the app, and restart.
But when I search index=mynewindex, I don't get any results, and I'm pretty sure we have logs in that directory.
Does anyone see anything wrong with my syntax? And how can I check whether my deployment app is working or not?
Hi @phamxuantung,
do the files you want to monitor always have the same file name, or does it continuously change?
If the file always has the same name, you don't need the crcSalt setting; that's usually used for files with different names containing almost the same logs.
Then, does the user you're using to run Splunk on the target have the permissions to access those folders and files?
Then, what's the format of the timestamp in the logs? Is it in European format (dd/mm/yyyy) or American format (mm/dd/yyyy)?
If it's in European format, in the first 12 days of the month you could index your logs with a wrong timestamp.
Last check: run this CLI command:
ls -al /home/cnttm/Vibus/logTransit/application.log
as the user that runs the splunkforwarder — do you get results?
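If the account that runs the forwarder is unknown, it can be discovered first and the permission check repeated as that account. A sketch of the idea (the user name `splunk` below is only an example; substitute whatever account the first command shows):

```shell
# Find which user the forwarder process runs as
ps -eo user,comm | grep -i splunkd

# Repeat the permission check as that user
# (replace 'splunk' with the account found above)
sudo -u splunk ls -al /home/cnttm/Vibus/logTransit/application.log
```

If the second command fails with "Permission denied", the forwarder cannot read the file either, regardless of what inputs.conf says.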
Anyway, to answer your question: the only way is to try; there isn't any other way!
Last thing: you have overlapping inputs in your inputs.conf. Splunk reads a file only once, so use only one of them.
Ciao.
Giuseppe
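Following that advice, the four overlapping stanzas could collapse into a single one (a sketch; the index name and path are taken from the question, and crcSalt is dropped since the filename never changes):

```
[monitor:///home/cnttm/Vibus/logTransit/application.log]
disabled = false
index = mynewindex
```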
Sorry for the late reply, @gcusello
1. The file I want to monitor always has the same filename. I wrote those duplicate stanzas to see if any of them would catch it, intending to slowly delete the rest and keep the one that works.
2. In our company, the person who installed the forwarder is in another department, so I don't really know the user or how to run the CLI command.
3. The timestamp format of the log is yyyy-mm-dd. Example:
2022-08-30T13:50:01.193+0700 DEBUG Enter process
2022-08-30T13:50:01.205+0700 DEBUG
<isomsg direction="incoming">
<field id="0" value="0800"/>
<field id="7" value="0830065002"/>
<field id="11" value="102316"/>
<field id="32" value="971040"/>
<field id="70" value="301"/>
</isomsg>
Does this have anything to do with getting the logs? I thought Splunk could automate the indexing process. If not, should I add anything to the config?
Also, I don't define a sourcetype in my stanzas in inputs.conf — would that be a problem?
Hi @phamxuantung,
OK, just as a check: copy the log to ingest into a new file with a different name, and see if it's indexed.
If yes, choose one of the input stanzas and delete the crcSalt row.
One question: the filename is always the same, but is the content (especially the first 256 chars) always the same, or does it change?
I suppose that they should change because the timestamp should change.
About the user, you have to check this point because it's a relevant one.
About your questions:
Does this have anything to do with the getting logs?
I thought Splunk can automate the indexing process. If not, should I add to config anything?
Also, I don't define sourcetype in my syntax in input.conf, would that be a problems?
Ciao.
Giuseppe
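On the sourcetype question: Splunk's automatic sourcetyping can usually parse an ISO 8601 timestamp like 2022-08-30T13:50:01.193+0700, but with multiline XML payloads like the one shown it is safer to define an explicit sourcetype with its own line-breaking and timestamp rules. A hedged sketch (the sourcetype name `transit_log` is invented; the attributes are standard props.conf settings):

```
# props.conf on the indexer or heavy forwarder
[transit_log]
SHOULD_LINEMERGE = false
# break events only where a new line starts with a timestamp
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2}T)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 30
```

The monitor stanza in inputs.conf would then reference it with `sourcetype = transit_log`, so the XML block following a DEBUG line stays attached to its event.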
You need to reload the deployment server to push the add-on.
splunk reload deploy-server
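After the reload, delivery can be verified on the forwarder itself. A sketch assuming a default /opt/splunkforwarder install path (adjust to the actual $SPLUNK_HOME):

```shell
# The pushed app should appear under etc/apps on the forwarder
ls /opt/splunkforwarder/etc/apps/

# Show the effective monitor stanzas after all config layers merge
/opt/splunkforwarder/bin/splunk btool inputs list monitor --debug

# Check the forwarder's own log for file-tailing or permission errors
grep -i "TailingProcessor" /opt/splunkforwarder/var/log/splunk/splunkd.log | tail -20
```

If the app directory is missing, the deployment server/server class mapping is the problem; if the stanza shows up in btool but nothing is indexed, look at permissions or the tailing errors in splunkd.log.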