Compiling stats for netstat output

tleyden
Explorer

Is it possible to take raw netstat input like this:

Proto Recv-Q Send-Q Local Address           Foreign Address         State       User       Inode      PID/Program name
tcp        0      0 10.181.112.50:34656     10.157.88.10:11210      ESTABLISHED 1001       121024     6925/sync_gateway
tcp        0      0 10.181.112.50:38528     10.109.187.75:11210     TIME_WAIT   1001       121039     6925/sync_gateway
tcp        0      0 10.181.112.50:39648     10.109.176.116:11210    ESTABLISHED 1001       121056     6925/sync_gateway
Proto Recv-Q Send-Q Local Address           Foreign Address         State       User       Inode      PID/Program name
tcp        0      0 10.181.112.50:34656     10.157.88.10:11210      ESTABLISHED 1001       121024     6925/sync_gateway
tcp        0      0 10.181.112.50:38528     10.109.187.75:11210     TIME_WAIT   1001       121039     6925/sync_gateway
tcp        0      0 10.181.112.50:39648     10.109.176.116:11210    TIME_WAIT   1001       121056     6925/sync_gateway

and for each "reading" (separated by the Proto Recv-Q header) compute stats like:

Reading 1

ESTABLISHED: 2
TIME_WAIT: 1

Reading 2

ESTABLISHED: 1
TIME_WAIT: 2

If it would make it easier to put each netstat reading into its own file, that would work too.

woodcock
Esteemed Legend

I am assuming that each "reading" is a separate event. If so, you need the multikv command and this should work:

... | streamstats current=t count AS reading | multikv | stats count by reading State
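
Run against the sample above, this should yield one row per reading/State pair, along these lines:

reading  State        count
1        ESTABLISHED  2
1        TIME_WAIT    1
2        ESTABLISHED  1
2        TIME_WAIT    2

If the readings all land in one file rather than arriving as separate events, one way to split them at index time (a sketch; the sourcetype name netstat is a placeholder, adjust to your setup) is to break on the repeating header line in props.conf:

[netstat]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=Proto\s+Recv-Q)
TRUNCATE = 0

SHOULD_LINEMERGE = false plus the lookahead in LINE_BREAKER keeps each header-and-rows block together as one multiline event, which is the table shape multikv expects; TRUNCATE = 0 lifts the default per-event character limit in case a reading lists many connections.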