Splunk Search

Messages truncated

keorus
New Member

Good morning,

I am coming to you because, after searching for an answer to my problem, my last resort is to ask for help on the Splunk forum.


Here is the context:


I have hundreds of messages with identical node parameters; only the parameter values change. For example:

"jobs": dev

"position": 3

"city": NY

"name": Leo

...


"jobs": HR

"position": 4

"city": CA

"name": Mike

...

The problem is that these hundreds of messages are sometimes truncated because their payloads are too large. I would like to find a way to display them in full.

I had thought about increasing the capacity in Splunk, but that is not possible for my project, and the truncated logs are less than 1% of the total, so a big configuration change for so few logs is not really a good move.

My second idea was to write a regex that finds the truncated messages, grouped into several pieces. Is that possible?

 

I also tried some searches to find my messages, like this, but it is not working:

index="" | eval key="<value i want>" | table _raw 
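For example, could something along these lines work to spot the truncated events? This is just a sketch, assuming the full messages are JSON objects that normally end with a closing brace (the index and sourcetype names are placeholders):

```
index=your_index sourcetype=your_sourcetype
| regex _raw!="\}\s*$"
| table _raw
```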


If not, maybe you have another idea?

 

Thank you for your help and time.

Have a good evening


gcusello
SplunkTrust

Hi @keorus,

I suppose your events are also truncated at the raw level.

In this case, you have to intervene at the input phase, adding an option to your props.conf:

[your_sourcetype]
TRUNCATE = 1000000

where 1000000 is a reasonable value for the maximum length of your events.

You have to put this props.conf on the indexers or, if present, on the first heavy forwarder that your data passes through.

Ciao.

Giuseppe


keorus
New Member

Thanks for your message @gcusello 



I just have one small issue: for now I can't touch the Splunk configuration; I have to work with the configuration as it is. It is a requirement of my team's project.

Would that mean the only solution is to change the Splunk configuration, and that there is no other option?


PickleRick
SplunkTrust

Splunk has a limit on how long a single event can be. If an event is longer, it is truncated at ingestion, which means that only the first $LIMIT characters (10000 by default) are stored in Splunk's index. The rest of the event is irrevocably lost, so you can't display what isn't there; it was simply never saved in your Splunk. That's why @gcusello said you have to talk with your Splunk team about raising the limit for this particular sourcetype if this is an issue for your data.
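One way to confirm this from the search side is to measure event lengths: events that were clipped at ingestion will cluster exactly at the TRUNCATE value. A sketch, with placeholder index and sourcetype names:

```
index=your_index sourcetype=your_sourcetype
| eval raw_len=len(_raw)
| stats count by raw_len
| sort - raw_len
```

If the largest raw_len values all sit at one identical number (for example 10000), that number is almost certainly your truncation limit.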

gcusello
SplunkTrust

Hi @keorus,

if the events are truncated even in the raw view, this means the logs were truncated at ingestion and you cannot do anything on the search side to solve the issue; you can only report the problem to the colleagues who manage Splunk ingestion so they can change the configuration.

Ciao.

Giuseppe
