Splunk Search

Messages truncated

keorus
New Member

Good morning,

I come to you because, after searching for an answer to my problem, my last resort is to seek help on the Splunk forum.


Here is the context:


I have hundreds of messages with identical node parameters; only the parameter values change. For example:

"jobs": dev

"position": 3

"city": NY

"name": Leo

.......


"jobs": HR

"position": 4

"city": CA

"name": Mike

........

The problem is that these hundreds of messages are sometimes truncated because their responses are too large. I would like to find a solution to display them in full.

I had thought about increasing the capacity in Splunk, but this is not possible for my project, and the truncated logs are less than 1% of the total, so a big change for so few logs is not really a good move.

My second idea was to write a regex that finds the truncated message split into several pieces. Is this possible?

 

I also tried some searches to find my message, like this one, but it is not working:

index="" | eval key="<value i want>" | table _raw 
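For illustration, here is the kind of regex-based search I mean (just a sketch; `your_index` is a placeholder, and it assumes the events are JSON-like, so a complete event should end with a closing brace while a truncated one does not):

```spl
index=your_index
| regex _raw!="\}\s*$"
| table _raw
```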


If not, maybe you have another idea?

 

Thank you for your help and time.

Have a good evening


gcusello
SplunkTrust

Hi @keorus,

I suppose that your events are truncated also at the raw level.

In this case, you have to intervene at the input phase by adding an option to your props.conf:

[your_sourcetype]
TRUNCATE = 1000000

using a reasonable value for the maximum length of your events.

You have to put this props.conf on the indexers or (if present) on the first heavy forwarder that your data passes through.

Ciao.

Giuseppe


keorus
New Member

Thanks for your message @gcusello 



I just have a little issue: for now I can't touch the Splunk configuration; I have to work with the current one. It is a requirement of my team's project.

Does that mean the only solution is to change the Splunk configuration, and that there is no other option?


PickleRick
SplunkTrust

Splunk has a limit on how long a single event can be. If an event is longer, it is truncated at ingestion, which means that only the first $LIMIT characters (10000 by default) are stored within Splunk's index. The rest of the event is irrevocably lost. So you can't display what isn't there; it's simply not saved in your Splunk. That's why @gcusello said you have to talk with your Splunk team about raising the limit for this particular sourcetype if this is an issue for your data.
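For example, to check whether events are actually hitting that limit, a search along these lines counts events whose raw length has reached the cap (a sketch; `your_index` and `your_sourcetype` are placeholders, and 10000 assumes the default value mentioned above):

```spl
index=your_index sourcetype=your_sourcetype
| eval raw_len=len(_raw)
| where raw_len >= 10000
| stats count
```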

gcusello
SplunkTrust

Hi @keorus,

if the events are truncated even in the raw visualization, this means that the logs are truncated at ingestion and you cannot do anything at search time to solve the issue; you can only report the problem to the colleagues who manage Splunk ingestion so they change the configuration.

Ciao.

Giuseppe
