New to Splunk. We have a clustered environment with hundreds of servers involved. How can we monitor the logs from those servers without installing the Universal Forwarder? We don't want to install a plugin or anything on all of those servers.
We have control of all the servers, so what would be the best way?
Any ideas or topics to cover will help.
Thanks.
Concluding the above:
You said "We have control of all the servers", so why can't you install the Universal Forwarder? I would suggest that the best way would be:
If installing the forwarder manually is the issue, then use a continuous deployment tool; if you don't want to go with a licensed one, create a script to do the work.
Since you have control of all the servers, a good place to start exploring would be these sample scripts:
https://answers.splunk.com/answers/34896/simple-installation-script-for-universal-forwarder.html
Posting a clearer question would help others to help you further.
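A scripted rollout along those lines could look like the sketch below. This is only a hypothetical outline, not from the linked answer: the tarball name, SSH user, host list file, and deployment server address are all assumptions, and by default it only prints the commands (DRY_RUN=1) instead of executing them.

```shell
#!/bin/sh
# Hypothetical sketch: push the Universal Forwarder to many hosts over SSH.
# Assumptions: passwordless SSH as root, a UF tarball in the current
# directory, hosts.txt with one hostname per line, and a deployment
# server at deploy.example.com:8089 (all placeholders).
# DRY_RUN=1 (the default) prints commands instead of running them.

DRY_RUN=${DRY_RUN:-1}
UF_TARBALL="splunkforwarder-9.1.1-linux-x86_64.tgz"
DEPLOY_SERVER="deploy.example.com:8089"

run() {
  if [ "$DRY_RUN" = 1 ]; then echo "WOULD RUN: $*"; else "$@"; fi
}

install_uf() {
  host="$1"
  run scp "$UF_TARBALL" "root@$host:/tmp/"
  run ssh "root@$host" "tar -xzf /tmp/$UF_TARBALL -C /opt && \
    /opt/splunkforwarder/bin/splunk start --accept-license --answer-yes --no-prompt && \
    /opt/splunkforwarder/bin/splunk set deploy-poll $DEPLOY_SERVER"
}

# Roll out to every host listed in hosts.txt.
if [ -f hosts.txt ]; then
  while read -r h; do install_uf "$h"; done < hosts.txt
fi
```

Pointing each forwarder at a deployment server (`set deploy-poll`) means the inputs can be managed centrally afterwards, so the script only ever has to run once per host.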
In the spirit of answering the question asked...
You could install a dedicated forwarder, and then write scripts on that forwarder to copy (ftp/smb/cifs/ssh) the log files off the hundreds of source servers onto the forwarder (perhaps into a directory for each host). Your inputs.conf would then need an entry for each log type and host name, so that you can override the host name using props.conf/transforms.conf.
You will then need a separate process to remove the indexed log files from your forwarder.
None of what I have suggested is sensible, and the previous answers/comments are all superior solutions, but if you really are playing with one hand tied behind your back, sometimes "sensible is off the table".
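The copy-then-index idea above could be sketched roughly like this. Everything here is an assumption for illustration (the staging path, SSH user, source log path, and host list file), and with the default DRY_RUN=1 it only prints what it would do:

```shell
#!/bin/sh
# Hypothetical sketch of the pull-based approach: copy logs from each
# source server into a per-host directory on one dedicated forwarder.
# Paths, the SSH user, and hosts.txt are placeholders.
# DRY_RUN=1 (the default) prints commands instead of running them.

DRY_RUN=${DRY_RUN:-1}
STAGING=/opt/splunk_staging          # directory monitored by the forwarder

run() {
  if [ "$DRY_RUN" = 1 ]; then echo "WOULD RUN: $*"; else "$@"; fi
}

pull_logs() {
  host="$1"
  run mkdir -p "$STAGING/$host"
  # rsync keeps repeat transfers incremental; scp would also work
  run rsync -az "loguser@$host:/var/log/app/*.log" "$STAGING/$host/"
}

# The separate cleanup pass mentioned above: drop copies after 7 days,
# on the assumption that the forwarder has indexed them by then.
cleanup() {
  run find "$STAGING" -type f -name '*.log' -mtime +7 -delete
}

if [ -f hosts.txt ]; then
  while read -r h; do pull_logs "$h"; done < hosts.txt
  cleanup
fi
```

Because each host gets its own directory under the staging path, an inputs.conf stanza like `[monitor:///opt/splunk_staging]` with `host_segment = 3` could set the event host from the directory name instead of per-host props/transforms overrides.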
Thanks. We are planning to add more servers and don't want to do manual installs. I should have asked that directly. Thanks much!!
To read a physical log file on your servers you need a program on your servers - that'd be the Universal Forwarder... but you said it's somehow impossible to use for you. Instead of writing log files to disk, your applications could send the logs directly to Splunk via the HEC. That way you wouldn't need to install anything on your servers as per your requirement.
A good alternative would be to let your applications send the logs to Splunk via the HTTP Event Collector: http://docs.splunk.com/Documentation/Splunk/6.5.2/Data/UsetheHTTPEventCollector
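Sending an event to HEC is a single authenticated HTTP POST. A minimal sketch, assuming a hypothetical Splunk host, the default HEC port 8088, and a placeholder token (DRY_RUN=1 just prints the payload instead of posting it):

```shell
#!/bin/sh
# Hypothetical HEC example. Host, port, and token are placeholders.
# DRY_RUN=1 (the default) prints the payload instead of posting it.

DRY_RUN=${DRY_RUN:-1}
HEC_URL="https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN="00000000-0000-0000-0000-000000000000"   # replace with a real token

send_event() {
  # Build the JSON envelope HEC expects; sourcetype is an assumption.
  payload="{\"event\": \"$1\", \"sourcetype\": \"myapp\"}"
  if [ "$DRY_RUN" = 1 ]; then
    echo "WOULD POST: $payload"
  else
    curl -sk "$HEC_URL" -H "Authorization: Splunk $HEC_TOKEN" -d "$payload"
  fi
}

send_event "application started"
```

The catch for this thread is the one raised below: HEC suits applications that can emit events over HTTP, not log files that already exist on disk, unless something on the server tails the file and posts its lines.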
It's a physical log file. How will the Event Collector help?
The best way would be the Universal Forwarder. What's your reason for not wanting to use UFs?
Thanks for the reply. Installing the Universal Forwarder on every server is not possible in my case. We are looking for a remote or alternative approach.
UF, HEC, syslog, batch file upload, REST call, rsync, bash script... the options are limitless.
What kind of servers/apps are you running? What kind of data are you collecting? That might help us work around your no-UF requirement by suggesting what is possible with the source machines/apps.