Getting Data In

Parsing appears to mangle part of a java stack trace

gregb
Explorer

I have an odd problem with some of my stack traces, which I have never seen before. It appears the delimiting punctuation gets stripped out of the trailing part of my stack traces. Anyone know why this might be happening?

The parsed and indexed event:

 at $Proxy27.service(Unknown Source)  
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)  
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)  
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)  
    at .javalang.reflect.Method.invoke(Unknown Source)  
    at ....()  
     ..orgspringframeworkexpressionspelsupportReflectiveMethodExecutorexecute:.ReflectiveMethodExecutorjava69at....()  
     ..orgspringframeworkexpressionspelastMethodReferencegetValueInternal:.MethodReferencejava83at....()
     ..orgspringframeworkexpressionspelastCompoundExpressiongetValueInternal:.CompoundExpressionjava57at....()  

Vs. the source:

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)  
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)  
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)  
    at java.lang.reflect.Method.invoke(Unknown Source)  
    at org.springframework.expression.spel.support.ReflectiveMethodExecutor.execute(ReflectiveMethodExecutor.java:69)  
    at org.springframework.expression.spel.ast.MethodReference.getValueInternal(MethodReference.java:83)  
    at org.springframework.expression.spel.ast.CompoundExpression.getValueInternal(CompoundExpression.java:57)

I am using a 4.3.4 indexer and a 4.3.4 forwarder. My props.conf:

MAX_EVENTS=50000
NO_BINARY_CHECK=1
SHOULD_LINEMERGE=true
MAX_TIMESTAMP_LOOKAHEAD=30
TIME_PREFIX=^\[
TRUNCATE=0
BREAK_ONLY_BEFORE=^\[\d{4}

Damien_Dallimor
Ultra Champion

"...tying my application to the splunk server..." > you aren't.Logging appenders faciliate abstraction of the underlying log destination.Thats the point of log4j, logback, and higher level facades such as SLF4J.

"...If for any reason the splunk server was offline, I would lose log events....." > SplunkJavaLogging has in built fault tolerance if the Splunk Indexer is down.

"...I couldnt say SplunkJavaLogging would result in any different behavior...." > SplunkJavaLogging can format your stacktraces in a best practice semantic format vs the birds nest that is the standard printStackTrace output that you are currently having parsing issues with.

gregb
Explorer

Technically, you are. Granted, the facades hide the tight coupling, but your Splunk server still becomes critical to your application infrastructure. While we have been exploring making Splunk a first-class citizen in our application technology stack, it's still early in that conversation.

Also, in your summary, you do say "for whatever reason..., that a UF can not be deployed. In this case, Splunk Java Logging can be used to forward events to Splunk... SplunkLogEvent class to construct your log events in best practice semantic format."

I will definitely check out the SplunkLogEvent.

Damien_Dallimor
Ultra Champion

Not a direct answer to your question, but an alternative suggestion.

Have you taken a look at SplunkJavaLogging?

It has custom Splunk logging appenders for Log4j and Logback that will forward your events to Splunk via HTTP REST or TCP, and the events will be formatted in a best-practice semantic logging format.

It also makes it REALLY easy to Splunk Java stack traces and handle the stack trace elements as multi-value fields.

Have a look at the throwableExample() method in this code example.
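
The calling side can stay plain Log4j; a rough sketch is below. The Splunk-specific appender wiring in log4j.properties is assumed rather than shown here, and the throwableExample() method in the project shows the real usage:

import org.apache.log4j.Logger;

// Minimal sketch of the calling side: log the Throwable itself and let
// whatever appender is configured (e.g. a SplunkJavaLogging TCP/REST
// appender declared in log4j.properties) decide how to serialize it.
public class ThrowableLoggingExample {

    private static final Logger logger = Logger.getLogger(ThrowableLoggingExample.class);

    public static void main(String[] args) {
        try {
            Integer.parseInt("not a number");
        } catch (NumberFormatException e) {
            // Pass the exception object instead of calling e.printStackTrace(),
            // so the appender/layout controls how the stack trace is formatted.
            logger.error("Failed to parse configuration value", e);
        }
    }
}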

gregb
Explorer

Thanks for the response. I am not a big fan of intimately tying my application to the Splunk server. If for any reason the Splunk server was offline, I would lose log events.

The issue, though, is that I have used this props.conf configuration with the corresponding log4j conversion pattern for a long time, and this is the first time I have seen this. What's more, considering this appears to be a parser issue (since the source is good), I couldn't say SplunkJavaLogging would result in any different behavior.
