I'm new to Splunk, and I need to feed some logging messages/events into Splunk remotely using the Splunk Python APIs.
This is a new platform running Linux, so there won't be any existing app to use.
The messages/events look like the following:
INFO 2014-12-29 20:37:54,611 getcustomertype 3010 getcustomer type....
INFO 2014-12-29 20:37:54,652 getcustomerid 2996 getcustomerid....
ERROR 2014-12-30 00:05:25,558 savecloudx_config 52 lookup bucket:cheng-bucket1 failed
I have launched Splunk on a remote Linux machine to collect data, and, from my platform, I would like to programmatically
call the Splunk Python APIs to connect to this remote Splunk, do some configuration, and then start calling whatever .submit()
method is appropriate so that Splunk can index the events/messages I submit and I can search that data.
I have already confirmed that I can access the remote Splunk using the following:
service = client.connect(host='10.88.0.99',port=8000,username='admin',password='123456')
My questions are:
1. Are there any examples?
2. What are the basic, essential Python APIs I need to call to do the initial configuration to start feeding events to Splunk? E.g., do I need to create a new index?
3. Is there any universal app that can be used for this purpose?
I am new to Splunk, so please forgive me if I ask stupid questions.
Thanks a lot...
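For question 1, here is a minimal sketch of what the Python side could look like with the splunklib SDK (`pip install splunk-sdk`). Note that the management/REST port is normally 8089, not the web port 8000; the host, credentials, index name, and sourcetype are placeholders taken from or invented around this thread, not a tested setup:

```python
def format_event(level, timestamp, operation, code, message):
    """Build a log line in the same layout as the sample events above."""
    return "%s %s %s %s %s" % (level, timestamp, operation, code, message)

def submit_events(lines, host="10.88.0.99", port=8089,
                  username="admin", password="123456",
                  index_name="commandlog"):
    """Connect to a remote Splunk and submit each line as an event.

    Host, credentials, and index name are illustrative assumptions.
    """
    import splunklib.client as client  # pip install splunk-sdk

    service = client.connect(host=host, port=port,
                             username=username, password=password)

    # Create the target index once if it does not exist yet.
    if index_name not in service.indexes:
        service.indexes.create(index_name)

    index = service.indexes[index_name]
    for line in lines:
        index.submit(line, sourcetype="command_log")
```

This answers the "do I need to create a new index?" part: you can reuse an existing index, but if you want your events separated, create one as shown and submit into it.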
The Splunk API is used to manage Splunk and run searches, but not to submit data. There are far easier ways to do that.
Perhaps the easiest is to install the Splunk Universal Forwarder on your local Linux box (the one generating the logs). Tell the forwarder where the logs are and it will send them to Splunk for indexing.
Thanks, Rich. Is there any document I can read for step-by-step instructions to install the Splunk Universal Forwarder and make it work?
I briefly glanced through the document. Here is how I understood it; please correct me if I am wrong:
1. Download the Splunk forwarder to my own machine.
2. Modify inputs.conf and specify which file to monitor. With this, when a new message is added to that file, the Splunk forwarder will send it out. (But which one to use? There are three of them...)
3. Modify outputs.conf to specify where to send the new messages/logs collected.
4. Restart Splunk.
1. Which inputs.conf should I modify? There are multiple of them.
2. For outputs.conf, what port number should I use? Do I need to configure something on the Splunk Enterprise server to listen on that port? What is the corresponding config on the Splunk Enterprise server?
3. For inputs.conf, [monitor://etc/localgateway/command.log] will monitor this command.log file and send newly added events to the Splunk server. Can I do filtering and send only certain messages? Or my backend software could do the filtering itself, put only the necessary messages into a new file, and have the forwarder monitor just that new file, so that I control which events/lines/messages are sent.
This is a lot of questions to ask; I really appreciate your help.
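The pre-filtering idea in question 3 can be sketched in Python; the levels kept and the file paths are illustrative assumptions, not part of any Splunk API:

```python
# Sketch: the backend copies only the wanted lines (here ERROR and WARN)
# into a separate file, and the forwarder monitors that file instead of
# the full log.

WANTED_LEVELS = ("ERROR", "WARN")

def filter_lines(lines, levels=WANTED_LEVELS):
    """Keep only lines whose first token is one of the wanted levels."""
    return [line for line in lines if line.split(" ", 1)[0] in levels]

def filter_log(src_path, dst_path):
    """Append the filtered lines from src_path to dst_path."""
    with open(src_path) as src, open(dst_path, "a") as dst:
        for line in filter_lines(src):
            dst.write(line)
```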
I have added the following on my log-holding Linux machine:
_TCPROUTING = *
index = _internal
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = .*
forwardedindex.2.whitelist = (audit|_introspection)
forwardedindex.filter.disable = false
On the Splunk Enterprise server, I have gone to "Settings -> Forwarding & Receiving -> Configure Receiving -> Add New"
and added port 9997.
I have checked the /opt/splunkforwarder/var/log/splunk/splunkd.log and I got:
12-31-2014 21:44:35.210 -0800 INFO TcpOutputProc - Initializing connection for non-ssl forwarding to 10.88.0.99:9997
12-31-2014 21:44:35.210 -0800 INFO TcpOutputProc - tcpout group indexer1 using Auto load balanced forwarding
12-31-2014 21:44:35.305 -0800 INFO TailingProcessor - Parsing configuration stanza: monitor:/etc/localgateway/commandlog.log.
12-31-2014 21:44:35.305 -0800 INFO TailingProcessor - Adding watch on path: /etc/localgateway/commandlog.log.
12-31-2014 21:44:35.313 -0800 INFO TcpOutputProc - Connected to idx=10.88.0.99:9997
12-31-2014 21:44:35.317 -0800 INFO WatchedFile - Will begin reading at offset=476993 for file='/opt/splunkforwarder/var/log/splunk/metrics.log'.
But I don't see "WatchedFile - Will begin reading at offset......for /etc/localgateway/commandlog.log".
On Splunk Enterprise, I don't see any log when I do "Search & Reporting->Data Summary".
The files to modify are $SPLUNK_HOME/etc/system/local/inputs.conf and $SPLUNK_HOME/etc/system/local/outputs.conf. Create them if necessary.
Do NOT use indexes that begin with _ - they are for Splunk to use.
The indexAndForward attribute does not apply to universal forwarders.
Universal forwarders do not filter - do that on the indexer.
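Indexer-side filtering can be sketched as a props.conf/transforms.conf pair that routes unwanted events to the null queue; the sourcetype name and regex below are illustrative:

```ini
# props.conf on the indexer (sourcetype name is an assumption)
[command_log]
TRANSFORMS-drop_info = drop_info_events

# transforms.conf on the indexer
[drop_info_events]
REGEX = ^INFO\s
DEST_KEY = queue
FORMAT = nullQueue
```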
You don't need the TCPROUTING attribute.
Try to keep things simple. Once you have data being indexed you can try to add controls.
You may need to specify the log file as monitor:///etc/localgateway/commandlog.log (note the three slashes after monitor:).
Verify you have your settings in the right files and restart the forwarder.
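The advice above can be sketched as two minimal files, using the file path and indexer address already mentioned in this thread (the index name and tcpout group name are illustrative, and the index must also exist on the indexer):

```ini
# $SPLUNK_HOME/etc/system/local/inputs.conf on the forwarder
[monitor:///etc/localgateway/commandlog.log]
index = commandlog
sourcetype = command_log

# $SPLUNK_HOME/etc/system/local/outputs.conf on the forwarder
[tcpout]
defaultGroup = indexer1

[tcpout:indexer1]
server = 10.88.0.99:9997
```

Restart the forwarder after editing either file.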
I have done what you said, and things are better, but there is still no data on the Splunk server.
I do see this in splunkd.log:
01-01-2015 09:12:31.451 -0800 INFO TailingProcessor - Parsing configuration stanza: monitor:///etc/localgateway/commandlog.log.
01-01-2015 09:12:31.451 -0800 INFO TailingProcessor - Adding watch on path: /etc/localgateway/commandlog.log.
01-01-2015 09:12:31.495 -0800 WARN DateParserVerbose - Failed to parse timestamp. Defaulting to timestamp of previous event (Wed Dec 31 21:51:21 2014). Context: FileClassifier /etc/localgateway/commandlog.log
01-01-2015 09:12:31.674 -0800 INFO TcpOutputProc - Connected to idx=10.88.0.99:9997
01-01-2015 09:13:01.184 -0800 WARN AuthenticationManagerSplunk - Seed file is not present. Defaulting to generic username/pass pair.
index = internal
TCPROUTING = *
index = _internal
_TCPROUTING = *
index = _internal
Any idea? I really appreciate your help. Thanks a lot.
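The "DateParserVerbose - Failed to parse timestamp" warning above suggests Splunk is not finding the timestamp, which in these events comes after the level keyword rather than at the start of the line. A hedged props.conf sketch for the indexer (the sourcetype name is an assumption and must match whatever the input assigns):

```ini
# props.conf on the indexer
[command_log]
# Skip the leading level keyword before looking for the timestamp
TIME_PREFIX = ^(INFO|WARN|ERROR)\s
# Matches e.g. 2014-12-29 20:37:54,611 (%3N = milliseconds)
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 30
```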
Thanks a lot. Finally I can see the logs on my Splunk Enterprise server.
I simply deleted all the other lines in inputs.conf and outputs.conf, and now I only have:
This seems to do the magic. Thank you for the advice.