Sorry for the complete noob question. But I have had this splunk project dropped on me and I need to spin up fast.
I have added a monitor on "myhost" like so:
[root@myhost bin]# pwd
/apps/splunkforwarder/bin
[root@myhost bin]# ./splunk add monitor /var/log/foo/
Your session is invalid. Please login.
Splunk username: admin
Password:
Added monitor of '/var/log/foo'.
That was yesterday.
I executed a script that writes data to a log file that is in the /var/log/foo directory on myhost.
But when I execute this search
host=myhost
I get zero results.
Since you didn't specify an index, Splunk will, by default, place your data in the 'main' index. The forwarder checks in under its hostname/IP address, so you can use that value in your host= parameter.
So you could try
index=main host=<myhost or ip address>
If you want to find out the hostname of the forwarder:
./splunk show default-hostname
then pass that hostname in your search.
Best practice is to simply create an inputs.conf file, either under etc/system/local or under an app's local directory, and monitor files that way, assuming you have configured outputs.conf to send data to the indexers (unless this is a standalone, all-in-one box).
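As a rough sketch of that setup: the app name my_inputs_app, the index name, the sourcetype, and the indexer address indexer.example.com:9997 below are all placeholders you would replace with your own values:

```
# $SPLUNK_HOME/etc/apps/my_inputs_app/local/inputs.conf
[monitor:///var/log/foo/]
index = main
sourcetype = foo

# $SPLUNK_HOME/etc/apps/my_inputs_app/local/outputs.conf
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer.example.com:9997
```

After editing either file, restart the forwarder so the changes take effect.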
Thanks for the reply!
Here's what I tried:
[root@myhost bin]# ./splunk show default-hostname
Default hostname for data inputs: myhost.
Then I tried the search with that hostname, but I still got no results.
@iiooiiooiioo check whether your forwarder (myhost) is actually sending any data at all to the _internal index.
Alternatively, check to see if the main index has ANY data:
| eventcount summarize=false index=* OR index=_*
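For the first check, a search along these lines (assuming the forwarder reports as myhost) would show whether the forwarder's own internal logs are reaching the indexer, which confirms connectivity independently of your monitored files:

```
index=_internal host=myhost sourcetype=splunkd
| head 10
```

If this returns nothing, the problem is forwarder-to-indexer connectivity (outputs.conf, network, ports) rather than the monitor stanza itself.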
Thanks for the reply. But I do not seem to have the varlog_input directory on my server:
[root@myhost etc]# pwd
/apps/splunkforwarder/etc
[root@myhost etc]# ls -l | grep varlog
[root@myhost etc]#
[root@myhost etc]# env | grep -i SPLUNK_HOME
[root@myhost etc]#
Here is an update to my original post. These are the locations of the inputs.conf and outputs.conf files I have on "myhost":
[root@myhost splunkforwarder]# pwd
/apps/splunkforwarder
[root@myhost splunkforwarder]# find . -name inputs.conf
./etc/system/default/inputs.conf
./etc/system/local/inputs.conf
./etc/apps/search/local/inputs.conf
./etc/apps/SplunkUniversalForwarder/default/inputs.conf
./etc/apps/introspection_generator_addon/default/inputs.conf
./etc/apps/splunk_httpinput/default/inputs.conf
[root@myhost splunkforwarder]# find . -name outputs.conf
./etc/system/default/outputs.conf
./etc/apps/SplunkUniversalForwarder/default/outputs.conf
./etc/apps/fwd-2-dev-indexers/default/outputs.conf
Splunk sets index = default when you add a new monitor, and the default index may not exist on the indexer servers. So you need to specify an index and sourcetype for your monitor. Edit /apps/splunkforwarder/etc/apps/search/local/inputs.conf and add the index and sourcetype as below, then restart the Splunk forwarder and check for data.
[monitor:///var/log/foo/]
index = main
sourcetype = foo
As @woodcock suggested, instead of updating Splunk's internal search app it is better to put inputs.conf in your own add-on and deploy it. Move /apps/splunkforwarder/etc/apps/search/local/inputs.conf to /apps/splunkforwarder/etc/apps/fwd-2-dev-indexers/default/ and restart the Splunk forwarder.
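On the forwarder host, that move might look like the following (paths taken from this thread; verify them on your own box before running):

```
cd /apps/splunkforwarder
mv etc/apps/search/local/inputs.conf etc/apps/fwd-2-dev-indexers/default/
./bin/splunk restart
```

Note that files under an app's default/ directory can be overwritten when the app is redeployed; placing the file in the app's local/ directory instead is the common convention for site-specific settings.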