Good morning,
I'm posting here because, after searching for an answer to my problem, my last resort is to ask for help on the Splunk forum.
Here is the context:
I have hundreds of messages with identical parameter names; only the parameter values change. Example:
"jobs": dev
"position": 3
"city": NY
"name": Leo
.......
"jobs": HR
"position": 4
"city": CA
"name": Mike
........
The problem is that some of these hundreds of messages are truncated because their responses are too large. I would like to find a solution to display them in full.
I had thought about increasing the limit in Splunk, but this is not possible for my project, and the truncated logs are less than 1% of the total, so a big configuration change for a few logs is not really a good move.
My second idea was to write a regex that finds the pieces of a truncated message so that they can be grouped back together. Is this possible?
I also tried some searches to find my message, like the one below, but it is not working:
index="" | eval key="<value i want>" | table _raw
If not, maybe you have another idea?
Thank you for your help and time.
Have a good evening
Hi @keorus,
I suppose that your events are truncated also at the raw level.
In this case, you have to intervene in the input phase by adding an option to your props.conf:
[your_sourcetype]
TRUNCATE = 1000000
which is a reasonable value for the maximum length of your events.
You have to put this props.conf on the indexers or (if present) on the first heavy forwarder that your data pass through.
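To confirm that the truncation really happens at ingestion, you can check the maximum raw length of your events with a search along these lines (index and sourcetype are placeholders); if the maximum sits exactly at a fixed value, the events are being cut at that limit:
index=your_index sourcetype=your_sourcetype
| eval raw_len=len(_raw)
| stats max(raw_len) as max_len count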
Ciao.
Giuseppe
Thanks for your message @gcusello
I just have one issue: for now I can't touch the Splunk configuration, I have to work with it as it is. That is a requirement of my team's project.
Does that mean the only solution is to change the Splunk configuration, and that there is no other option?
Splunk has a limit on how long a single event can be. If an event is longer, it is truncated at ingestion, which means that only the first N characters (10000 by default, the TRUNCATE setting) are stored in Splunk's index. The rest of the event is irrevocably lost, so you can't display what isn't there: it's simply not saved in your Splunk. That's why @gcusello said you have to talk to your Splunk team about raising the limit for this particular sourcetype if this is an issue for your data.
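If you want a rough idea of how many of your events are affected, you can count the events whose raw length has reached the limit, for example like this (index and sourcetype are placeholders, and 10000 is the default TRUNCATE value, so adjust it if your sourcetype uses a different limit):
index=your_index sourcetype=your_sourcetype
| eval raw_len=len(_raw)
| where raw_len >= 10000
| stats count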
Hi @keorus,
if the events are truncated also in the raw view, it means that the logs are truncated at ingestion and you cannot do anything at search time to solve the issue; you can only report the problem to the colleagues who manage Splunk ingestion so they can change the configuration.
Ciao.
Giuseppe