Getting Data In

How to parse and index thread dumps in Splunk?

Path Finder

I have custom thread dump data that I want to index in Splunk. So far I have tried almost every method to index and parse it, but I'm still not able to get meaningful information out of it.

Below is a sample of the data.

Total threads: 434 on 2018-04-29T17-00-03+0800

Name        CPU Time (ms)       User Time (ms)      Id      State
IPC Parameter Sending Thread #868       0       0       17782       TIMED_WAITING
    waiting on java.util.concurrent.SynchronousQueue$TransferStack@4d974d1c at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
    at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
    at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
    at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
IPC Client (2006835220) connection to abcd.com/10.21.216.71:8020 from hdfs      0       0       17781       TIMED_WAITING
    waiting on org.apache.hadoop.ipc.Client$Connection@5be22c7a at java.lang.Object.wait(Native Method)
    at org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:931)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:976)
1306724746@qtp-695248316-10273      29      10      17780       TIMED_WAITING
    waiting on org.mortbay.thread.QueuedThreadPool$PoolThread@4de3098a  at java.lang.Object.wait(Native Method)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
2125946201@qtp-695248316-10271      40      20      17776       TIMED_WAITING
    waiting on org.mortbay.thread.QueuedThreadPool$PoolThread@7eb75d59  at java.lang.Object.wait(Native Method)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
1805988475@qtp-695248316-10269      133     70      17772       TIMED_WAITING
    waiting on org.mortbay.thread.QueuedThreadPool$PoolThread@6ba5327b  at java.lang.Object.wait(Native Method)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
402769840@qtp-695248316-10266       161     90      17766       TIMED_WAITING
    waiting on org.mortbay.thread.QueuedThreadPool$PoolThread@1801c7b0  at java.lang.Object.wait(Native Method)
    at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)
FastExecutor-3-60       1       0       17747       WAITING
    waiting on java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@476f266c   at sun.misc.Unsafe.park(Native Method)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1067)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at com.xyz.shadow.io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:145)
    at java.lang.Thread.run(Thread.java:745)

How can I get each of these into a single event?

So far the most helpful approach was "tsv" structured parsing, since it was able to extract Name, CPU Time, Id, and State. What I'm not getting is the stack trace: it goes onto the next line and ends up in the "CPU Time" field. Is there any way I can get it all into a single event?

Even if it isn't parsed, that's fine, but each thread should come in as a single event.

0 Karma
1 Solution

Ultra Champion

To get it properly split by thread, something like this should work in props.conf (on your indexer(s), or on your Heavy Forwarder if you use one). This assumes each trace ends with .java:xxx) and that, as in your sample, each line has a leading space; if that is not the case in your actual data, remove the single \s from the regex.

LINE_BREAKER = \.java:\d+\)([\r\n]+)\s\S+
SHOULD_LINEMERGE = false
TRUNCATE = 0

This would split it like this:
Event 1:

 Total threads: 434 on 2018-04-29T17-00-03+0800

 Name        CPU Time (ms)        User Time (ms)        Id        State
 IPC Parameter Sending Thread #868        0        0        17782        TIMED_WAITING
     waiting on java.util.concurrent.SynchronousQueue$TransferStack@4d974d1c    at sun.misc.Unsafe.park(Native Method)
     at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
     at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:460)
     at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:362)
     at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:941)
     at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1066)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1127)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
     at java.lang.Thread.run(Thread.java:745)

Event 2:

 IPC Client (2006835220) connection to abcd.com/10.21.216.71:8020 from hdfs        0        0        17781        TIMED_WAITING
     waiting on org.apache.hadoop.ipc.Client$Connection@5be22c7a    at java.lang.Object.wait(Native Method)
     at org.apache.hadoop.ipc.Client$Connection.waitForWork(Client.java:931)
     at org.apache.hadoop.ipc.Client$Connection.run(Client.java:976)

Event 3:

 1306724746@qtp-695248316-10273        29        10        17780        TIMED_WAITING
     waiting on org.mortbay.thread.QueuedThreadPool$PoolThread@4de3098a    at java.lang.Object.wait(Native Method)
     at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)

etc.
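As a sanity check, the breaking rule can be mimicked outside Splunk. Below is a minimal Python sketch (the thread names and numbers are invented, and the \s-free variant of the regex is assumed): start a new event whenever a line ending in `.java:NNN)` is followed by a line that begins with a non-whitespace character.

```python
import re

# Minimal snippet mirroring the dump in the question: a header line per
# thread (no leading whitespace) followed by indented stack-frame lines,
# each frame ending in ".java:NNN)". Thread names here are invented.
dump = (
    "Thread-A\t0\t0\t100\tWAITING\n"
    "\tat java.lang.Thread.run(Thread.java:745)\n"
    "Thread-B\t1\t0\t101\tRUNNABLE\n"
    "\tat org.example.Foo.bar(Foo.java:10)\n"
)

# LINE_BREAKER = \.java:\d+\)([\r\n]+)\S+ (the \s-free variant) breaks an
# event where a frame line ending in ".java:NNN)" is followed by a line
# starting with a non-whitespace character. The same rule, procedurally:
frame_end = re.compile(r"\.java:\d+\)$")
events, current = [], []
for line in dump.splitlines():
    if current and not line[:1].isspace() and frame_end.search(current[-1]):
        events.append("\n".join(current))
        current = []
    current.append(line)
if current:
    events.append("\n".join(current))

# events[0] begins with "Thread-A", events[1] begins with "Thread-B"
```

Indented trace lines never trigger a break (they start with whitespace), which is exactly why the `\S` after the newline in the Splunk regex marks the thread boundary.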

And then you'll need to define some field extractions, for example the following.

props.conf (on your search head(s))

EXTRACT-thread-dump-headers = ^(?<Name>.*?)\s+(?<CPUTime>\d+)\s+(?<UserTime>\d+)\s+(?<Id>\d+)\s+(?<State>[^\r\n]+)[\r\n]+(?<Trace>.*)
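To see what that extraction yields, the same pattern can be tried in Python against one event from the sample. Note that Python spells named groups `(?P<Name>...)` rather than PCRE's `(?<Name>...)`, and `re.DOTALL` is used here so `Trace` can span the remaining multi-line stack frames (in Splunk you would presumably need a `(?s)` prefix for the same effect).

```python
import re

# One event from the sample: tab-separated header, then an indented frame.
event = (
    "1306724746@qtp-695248316-10273\t29\t10\t17780\tTIMED_WAITING\n"
    "\tat org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:626)\n"
)

# Same pattern as the EXTRACT, in Python named-group syntax; DOTALL lets
# <Trace> capture everything after the header line, newlines included.
pattern = re.compile(
    r"^(?P<Name>.*?)\s+(?P<CPUTime>\d+)\s+(?P<UserTime>\d+)\s+(?P<Id>\d+)"
    r"\s+(?P<State>[^\r\n]+)[\r\n]+(?P<Trace>.*)",
    re.DOTALL,
)
m = pattern.match(event)
# m.group("Name")  -> "1306724746@qtp-695248316-10273"
# m.group("State") -> "TIMED_WAITING"
```

The lazy `(?P<Name>.*?)` is what keeps multi-word thread names intact: it expands only until the first whitespace run that is followed by the three numeric columns.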



Path Finder

Hi @FrankVl,

Thanks so much for your quick response. I really appreciate it.

I tried your method and logically that's exactly what I want.

But the props.conf settings are not working: the lines are breaking unevenly. I suspect I need to tweak the regex.

Do you have an alternate regex, or is there any platform where I can test and build regexes for line breaking?

0 Karma

Ultra Champion

Can you show with a screenshot or so what you mean with breaking unevenly?

You can use regex101.com for testing regexes. Your sample data with my regex gives me this result: https://regex101.com/r/HCzl1A/1

Which to me looks like it is correctly finding the event boundaries. But perhaps there is a subtle difference between what Splunk is reading and how you pasted the sample data here?

0 Karma

Path Finder

Please find below a link to a screenshot from Splunk. These are the raw events after line breaking.

https://answers.splunk.com/storage/temp/251822-2018-05-30-20-40-30-root.png

0 Karma

Ultra Champion

Your sample data in your question contained a space before each thread name and multiple spaces to indent the stack trace.

I think in reality your data doesn't contain a space before the thread name and uses a tab to indent the trace lines?

Try removing the \s from the regex.
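For reference, the adjusted stanza (assuming only that leading \s needed to go) would look like:

```ini
LINE_BREAKER = \.java:\d+\)([\r\n]+)\S+
SHOULD_LINEMERGE = false
TRUNCATE = 0
```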

0 Karma

Path Finder

Yup, that works perfectly fine. I had tried removing the \s from the GUI, but at the time it was not working. When I changed it in the props.conf file and restarted Splunk, it started breaking the events perfectly.

Thanks so much, @FrankVl.

0 Karma

SplunkTrust

What are your props.conf settings for that sourcetype?

---
If this reply helps you, an upvote would be appreciated.
0 Karma