Getting Data In

Would there be any issues with the Splunk indexer/UF handling such a large TRUNCATE value or event size?

SplunkDash
Motivator

Hello,

I have a source file with very large events, so I need to set TRUNCATE=1000000 in my props. Do you think there would be any issue with the Splunk indexer/UF handling this large a TRUNCATE value or event size? Are there any alternatives if there are issues? Any recommendation would be highly appreciated. Thank you!
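For reference, a minimal props.conf sketch of the setting being discussed. The sourcetype name is a placeholder; scoping the higher limit to one sourcetype avoids raising it globally:

```ini
# props.conf -- on the indexer or heavy forwarder that parses this data.
# "my_large_sourcetype" is a placeholder stanza name.
[my_large_sourcetype]
# Maximum event size in bytes before Splunk truncates the line (default 10000).
TRUNCATE = 1000000
# With very large events, it is worth confirming line breaking as well, so a
# single logical event is not split or merged unexpectedly.
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
```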


PickleRick
SplunkTrust

I see at least two possible issues.

One is simply with displaying such big events in the UI. You might have performance problems rendering lists of events.

The second one is with indexing. You would either have big fields, which would bloat your index files, or relatively small fields, which would probably force Splunk to go over many events just to parse and discard them from the search, which means very inefficient searching.

Remember that Splunk breaks your event into terms separated by delimiters and indexes those terms. Then, if you're searching for key=value, Splunk checks for the value in its indexes, and if the index points to a specific event, Splunk parses that event to see whether the location of the value within the event's content matches the field "key" definition. This process gets less efficient the more often your search term occurs outside the searched field, and with huge events I suppose the probability of that increases.
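A toy model of the two-step process described above (not Splunk's actual implementation) shows why huge events hurt: the cheap term lookup matches the value anywhere in the event, so every event containing the term must still be parsed and possibly discarded.

```python
import re

# Toy sketch of a term index plus post-filtering, illustrating why a
# key=value search degrades when the value occurs outside the field.
events = [
    'user=alice action=login status=ok',
    'user=bob action=upload note="alice asked for this"',  # "alice" outside user=
]

# Index every term of every event, split on simple delimiters.
term_index = {}
for i, ev in enumerate(events):
    for term in re.split(r'[\s="]+', ev):
        if term:
            term_index.setdefault(term, set()).add(i)

def search(key, value):
    # Step 1: term lookup -- cheap, but matches the value anywhere.
    candidates = term_index.get(value, set())
    # Step 2: parse each candidate to confirm the value belongs to the field.
    hits = [i for i in sorted(candidates)
            if re.search(rf'\b{key}={value}\b', events[i])]
    return sorted(candidates), hits

candidates, hits = search('user', 'alice')
# Both events contain the term "alice", so both must be parsed in step 2,
# but only event 0 actually has user=alice. The bigger the events, the more
# often step 2 parses an event only to discard it.
```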


SplunkDash
Motivator

Hello, 

Thank you so much for your quick reply; it makes sense to me. Do you think using the REST API (instead of UF/HF pushes) for this ingestion would optimize the process? Thank you again.


PickleRick
SplunkTrust

Not really. It could help a bit if you had indexed fields and you created and pushed them explicitly via HEC instead of having the HF/indexer parse them out, but other than that there shouldn't be much of a real difference. The search-time extractions are just that: performed at search time, so they run every time you search through your events.
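For what the HEC option looks like in practice, here is a hedged sketch of an HTTP Event Collector payload that pushes indexed fields explicitly via the top-level "fields" key, so the HF/indexer does not have to parse them out of the raw event. The URL, token, sourcetype, and field names are all placeholders:

```python
import json

# Sketch of an HEC /services/collector/event payload with explicit
# index-time fields. URL and token below are placeholders.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

payload = {
    "event": "very large raw event body ...",
    "sourcetype": "my_large_sourcetype",  # placeholder sourcetype
    # Keys under "fields" become index-time fields on the event.
    "fields": {"transaction_id": "abc123", "status": "ok"},
}

body = json.dumps(payload)
headers = {"Authorization": f"Splunk {HEC_TOKEN}"}

# The actual send is left commented out so the sketch stays self-contained:
# import urllib.request
# req = urllib.request.Request(HEC_URL, data=body.encode(), headers=headers)
# urllib.request.urlopen(req)
```

Note the trade-off from the reply above: this only helps for fields you deliberately promote to index time; ordinary search-time extractions still run at every search regardless of how the data was ingested.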
