This kinda sucks. I don't think it has ever worked, though. Just one more thing in Splunk that doesn't work and isn't supported by them. So far, Splunk has been the biggest waste of money ever.
Hi jchidiac,
The online sandbox does not currently support the Java SDK. I will add this to our list of future feature requests.
Thanks for your question,
Randy
I'm afraid you've misunderstood the question. Each forwarder has only one NIC. The receiver has two NICs, one on a 10/8 network and one on a 172.30/16 network. We don't want to bind the receiving splunkd to only one IP; it must receive from forwarders on both networks. The question is: how do we prevent forwarders (each of which is connected to only one network at a time) from attempting to send data out on both networks simultaneously? (Some forwarders can be manually switched from one network to the other, but they're still only connected to one network at a time.)
I guess he does not want MongoDB data in Splunk ... he wants to look up values from MongoDB. That can be done via a Python script; no Hunk required.
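To make the suggestion concrete, here is a minimal sketch of a Splunk external lookup script (the kind wired up via transforms.conf). Splunk pipes CSV rows to the script's stdin and reads enriched CSV back from stdout. The MongoDB query is stubbed out here — a real version would use pymongo — and the field names (`user_id`, `user_name`) are purely illustrative:

```python
#!/usr/bin/env python
# Sketch of a Splunk external lookup script. Splunk sends CSV rows on stdin
# and reads enriched CSV from stdout. fetch_from_mongo() is a stub; a real
# version would query MongoDB with pymongo. Field names are illustrative.
import csv
import sys


def fetch_from_mongo(user_id):
    """Stub standing in for e.g. db.users.find_one({"_id": user_id})."""
    fake_collection = {"42": "alice", "7": "bob"}   # stand-in for the real data
    return fake_collection.get(user_id, "")


def enrich_rows(rows, lookup=fetch_from_mongo):
    """Fill in the empty output field on each CSV row via the lookup."""
    for row in rows:
        row["user_name"] = lookup(row.get("user_id", ""))
        yield row


def run(stdin=sys.stdin, stdout=sys.stdout):
    """Read CSV rows from Splunk, enrich them, and write CSV back."""
    reader = csv.DictReader(stdin)
    writer = csv.DictWriter(stdout, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in enrich_rows(reader):
        writer.writerow(row)
```

In the real deployment you would call run() from a __main__ guard and register the script in transforms.conf as an external lookup.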
The query I wrote was returning results.
I tried this query as well, but it's not working.
I'm still getting the same result: f1 and f2 are in the file, but there's no data under them.
Interesting. Because JSP executes code on the server side, it may be a bit tricky to retrieve Splunk data/graphs using JSP unless you do one of the following using the REST API/SDK:
1- Include the JavaScript SDK in your project to send data to Splunk and render data/visualizations. Back in my classic ASP days, I would sometimes write JavaScript code from ASP using various string-building techniques. The JavaScript SDK is my recommended way. The reason I recommend it, without knowing how much data you will be marshaling back and forth, is that JavaScript executes on the client side rather than the server side; however, when it comes to Node.js, the lines can become blurry.
2- Include the Java SDK in your JSP project so that your code is executed on the server. An added benefit is that you hide your secret sauce.
Here is a blog post with logman examples as well as links to other tools: http://blogs.msdn.com/b/oanapl/archive/2009/08/05/etw-event-tracing-for-windows-what-it-is-and-useful-tools.aspx
I think that a complete answer to this question should have samples that work with Splunk.
There are a few approaches you can take.
1- Try to do in Splunk what your other distributed system is doing; search commands such as 'transaction' can follow an ID from start to finish
2- Run a search and export the results to JSON, CSV, raw text, etc. and import into your system
3- Take advantage of the SDK/API to pull the data out and send the data to wherever you want it to go (including processing)
Try option 1 first. The search language provided by Splunk is quite rich and powerful.
Here are a few links:
SDK: http://dev.splunk.com/view/sdks/SP-CAAADP7
Create a saved search and export via REST: http://docs.splunk.com/Documentation/Splunk/5.0.2/RESTAPI/RESTsearch
Identify and group events based upon transaction: http://docs.splunk.com/Documentation/Splunk/5.0.2/Search/Identifyandgroupeventsintotransactions
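As a rough sketch of option 3, you can pull results out over the REST API with nothing but the Python standard library by POSTing to the streaming export endpoint, /services/search/jobs/export. This is only a sketch, not the SDK approach; the host, credentials, and search string below are placeholders:

```python
# Build a POST request against Splunk's /services/search/jobs/export REST
# endpoint, which streams search results back in the requested output_mode.
# Host, credentials, and the search string are placeholders.
import base64
import urllib.parse
import urllib.request


def build_export_request(base_url, username, password, search,
                         output_mode="json"):
    """Build a POST request for Splunk's streaming export endpoint."""
    url = base_url.rstrip("/") + "/services/search/jobs/export"
    data = urllib.parse.urlencode({
        "search": "search " + search,   # export expects the leading 'search'
        "output_mode": output_mode,
    }).encode("utf-8")
    req = urllib.request.Request(url, data=data)
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    return req


# Usage against a live server (untested here):
# req = build_export_request("https://localhost:8089", "admin", "changeme",
#                            "index=main | head 10")
# with urllib.request.urlopen(req) as resp:
#     for line in resp:
#         ...  # each line is one result in the chosen output_mode
```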
Hope this helps.
Have you taken a look at our Python SDK?
You can use this to execute Splunk searches and integrate the results into your application and also send events from your Python app directly into Splunk. There is also a PHP SDK.
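For the event-sending direction, here is a minimal sketch that skips the SDK and POSTs a raw event to the /services/receivers/simple REST endpoint (which the SDK wraps) using only the standard library. The host, credentials, index, and sourcetype are placeholders:

```python
# Build a POST request that submits one raw event to a Splunk index via the
# /services/receivers/simple REST endpoint. Host, credentials, index, and
# sourcetype are placeholders.
import base64
import urllib.parse
import urllib.request


def build_submit_request(base_url, username, password, event,
                         index="main", sourcetype="my_app"):
    """Build a POST request that submits one raw event to an index."""
    query = urllib.parse.urlencode({"index": index, "sourcetype": sourcetype})
    url = base_url.rstrip("/") + "/services/receivers/simple?" + query
    req = urllib.request.Request(url, data=event.encode("utf-8"))
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    return req
```

With the Python SDK the equivalent is roughly connecting and calling submit() on an index object, which hides the endpoint details above.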
Thanks for the response.
The data is imported in iis-2 format. During search I use "extract auto=true" to get each field from cs_uri_stem, as these fields are not automatically captured at index time.
If I alter the props config, will it change all encoding in cs_uri_stem?
There are two parameters in cs_uri_stem that I would not want to decode.
The eval function in search does work, but I would like to do it at the indexing stage.
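For reference, the search-time eval workaround mentioned above is typically Splunk's built-in urldecode eval function; the output field name here is illustrative:

```
... | eval cs_uri_stem_decoded=urldecode(cs_uri_stem)
```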
No WSDL, but as mentioned and linked in previous replies, there are extensive REST endpoint docs.
Furthermore, our six language SDKs (Java, JavaScript, Python, Ruby, C#, PHP) spare the developer from having to know the underlying semantics of the REST endpoints.
SoS unfortunately does not show much else for our universal forwarder. The most I could conclude from SoS was that file descriptor and memory usage are within acceptable limits and that CPU usage is at 100%. It also told me that the TailingProcessor was reporting errors, which was a good starting point.
After using system profiling tools, however, I found that of the 24 splunkd threads running, only one is constantly at 100%. Further digging led me to a stanza that was causing errors for the TailingProcessor. It was not the stanza itself but rather the data being monitored: temporary files were being created and then removed in a certain spool folder, and this seems to have been causing problems for the universal forwarder. After blacklisting said spool folder, the load dropped to almost zero and is staying there for now.
For Splunk 6.4.x:
Here is a list of the different options for exporting to a file from the CLI:
$SPLUNK_HOME/bin/splunk search 'index=main' -output table > tofile.txt
$SPLUNK_HOME/bin/splunk search 'index=main | head' -output raw > tofile.txt
$SPLUNK_HOME/bin/splunk search 'index=main | head' -output rawdata > tofile.txt
$SPLUNK_HOME/bin/splunk search '*' -output csv > tofile.txt
$SPLUNK_HOME/bin/splunk search 'index=main id=abs*' -output json > tofile.txt
The default behavior of the CLI search is to export only the first 100 results. Use the -maxout 0 option to bypass that limit.
$SPLUNK_HOME/bin/splunk search 'index=main id=abs*' -output json -maxout 0 > tofile.txt
If you don't specify an output option, the default is to export only _raw.