Developing for Splunk Enterprise

Java stack trace

Explorer

Hi

I am quite new to Splunk. I need some help configuring the Splunk light forwarder so that a Java stack trace is received and forwarded as a single event, rather than as many separate single-line events.

How would I go about doing this?

I use a UDP port and log4j. Must I define a new sourcetype? The event always starts with "Exception caught handling request:" and ends with "at java.lang.Thread.run(Thread.java:662)".

The event is sometimes 500 lines long.


Explorer

(Not a timely response, but hopefully someone will find this useful.) I handle stack traces pretty much the same way I handle all my enterprise application parsing/indexing configurations: I configure props.conf so that it line-breaks on a custom regex. Because I use a standard log format, I drop in some variant of the following:

# Allow very long merged events (the default is 256 lines)
MAX_EVENTS=50000
# Skip the binary-file check on this data
NO_BINARY_CHECK=1
# Merge raw lines into multi-line events
SHOULD_LINEMERGE=true
# Look for the timestamp within 30 characters after the TIME_PREFIX match
MAX_TIMESTAMP_LOOKAHEAD=30
# The timestamp follows an opening bracket at the start of the line
TIME_PREFIX=^\[
# Never truncate an event, no matter how long
TRUNCATE=0
# Only start a new event at a line beginning with "[YYYY..."
BREAK_ONLY_BEFORE=^\[\d{4}

Essentially, this tells the parser where to find the timestamp, never to truncate a log event, and to keep merging lines until it finds the start of the next header. This will pretty much take care of any multi-line log event, including a stack trace. I surround the timestamp with '[' ']' so as to minimize false matches for BREAK_ONLY_BEFORE=^\[\d{4}.

Since I use third-party loggers like log4j/logback/log4net, which provide a convenient log message preamble/header, I always try to standardize on:

"[%d{yyyy-MM-dd HH:mm:ss.SSS z}] [%p] [%t] [%x] [%c] %m%n"

This translates to:

"[timestamp] [log level] [thread id] [NDC] The log message"
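For reference, a layout like this would be wired up in log4j 1.x roughly as follows. This is only a sketch: the appender name and file path are made up, but the ConversionPattern is the one quoted above.

```
# Sketch of a log4j 1.x properties file using the layout above;
# the appender name "app" and the log path are illustrative
log4j.rootLogger=INFO, app
log4j.appender.app=org.apache.log4j.RollingFileAppender
log4j.appender.app.File=/var/log/myapp/app.log
log4j.appender.app.layout=org.apache.log4j.PatternLayout
log4j.appender.app.layout.ConversionPattern=[%d{yyyy-MM-dd HH:mm:ss.SSS z}] [%p] [%t] [%x] [%c] %m%n
```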

So whenever a new line starts with the rendered "[%d{yyyy" timestamp, the parser breaks and starts a new event.
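With that layout, a stack trace like the one described in the question lands in the index as a single event. The timestamp and class names below are invented for illustration:

```
[2013-04-02 14:05:33.812 UTC] [ERROR] [http-8080-1] [] [com.example.RequestHandler] Exception caught handling request:
java.lang.NullPointerException
    at com.example.RequestHandler.process(RequestHandler.java:42)
    at java.lang.Thread.run(Thread.java:662)
```

Only the first line matches BREAK_ONLY_BEFORE=^\[\d{4}, so every "at ..." line merges into the same event.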

One concern I have with this approach is that the documentation says SHOULD_LINEMERGE has a performance impact, but I don't really see any other alternatives. Hope this helps.

Ultra Champion

I have another suggestion for you.

Have you taken a look at SplunkJavaLogging?

It has custom Splunk logging appenders for Log4j and Logback that will forward your events to Splunk via HTTP REST or TCP, and the events will be formatted using best-practice logging semantics.

It also makes it REALLY easy to index Java stack traces in Splunk and handle the stack trace elements as multi-value fields.

Have a look at the throwableExample() method in this code example.

Legend

I think there may already be an answer here:

Merge multiline Java stack trace via syslog

The log4j sourcetype expects the format that is normally used in log files. If you are using syslog, it will have changed the format slightly, so the log4j sourcetype would not be appropriate.

Note that Gerald's "java-over-syslog" settings belong in a props.conf file on the indexer. It is the indexer, not the light forwarder, that does the parsing.

Is there any way that you can use a forwarder to collect this data stream from a file instead of syslog?
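For illustration, an indexer-side props.conf stanza for this kind of data might look like the sketch below. The source stanza and the break pattern are assumptions based on the event markers described in the question, not Gerald's actual java-over-syslog settings:

```
# Illustrative only -- goes in props.conf on the INDEXER, not the light forwarder.
# "udp:514" and the regex below are assumptions for this particular data stream.
[source::udp:514]
SHOULD_LINEMERGE = true
# Keep merging lines until the next exception header appears
BREAK_ONLY_BEFORE = Exception caught handling request:
# Stack traces can run to ~500 lines, so raise the merge limit
MAX_EVENTS = 1000
TRUNCATE = 0
```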


Explorer

Hi lguinn

I have set up props.conf but I still get the same result, which makes me believe my forwarder is not forwarding the data to the indexer correctly. (We do not read from a file, as this would put strain on the host.)

Does the forwarder only send data, without changing or adding anything like dates or timestamps? (How can I see what the forwarder receives and sends?)
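On the last question, one way to check what a forwarder is reading and sending is to look at its own CLI and internal logs. The commands below are a sketch assuming a *nix install under $SPLUNK_HOME:

```
# Show the indexers this forwarder is configured to send to
$SPLUNK_HOME/bin/splunk list forward-server

# Watch the forwarder's internal log for input and connection activity
tail -f $SPLUNK_HOME/var/log/splunk/splunkd.log

# Forwarding throughput is recorded in metrics.log
grep -i tcpout $SPLUNK_HOME/var/log/splunk/metrics.log
```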
