Getting Data In

Would there be any issues for the Splunk indexer/UF in handling this large a TRUNCATE value or event size?

SplunkDash
Motivator

Hello,

I have a source file with very large events, so I need to set TRUNCATE=1000000 in my props. Do you think there would be any issue for the Splunk indexer/UF in handling such a large TRUNCATE value or event size? Are there any alternatives if there are issues? Any recommendation would be highly appreciated. Thank you!
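For context, the props.conf stanza in question might look like the following (the sourcetype name `my_large_events` is a placeholder, not taken from the original post):

```ini
[my_large_events]
# Raise the default 10000-byte truncation limit to 1 MB
TRUNCATE = 1000000
```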


PickleRick
SplunkTrust

I see at least two possible issues.

One is simply with displaying such big events in the UI. You might have performance problems rendering lists of events.

The second one is with indexing. You would either have big fields, which would inflate your index files, or relatively small fields, which would probably force Splunk to go over many events just to parse and discard them from the search, meaning very inefficient searching.

Remember that Splunk breaks your event into terms separated by delimiters and indexes those terms. Then, if you're searching for key=value, Splunk checks for the value in its indexes, and if the index points to a specific event, Splunk parses that event to see whether the location of the value within the event's content matches the field "key" definition. This process gets less efficient the more often your search term occurs outside of the searched field, and with huge events I suppose the probability of that increases.
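The lookup-then-verify behaviour described above can be sketched roughly like this (a simplified toy illustration, not Splunk's actual implementation; the tokenizer and the field-matching regex are stand-ins):

```python
import re

def build_index(events):
    """Toy stand-in for Splunk's term index: term -> set of event ids."""
    index = {}
    for i, event in enumerate(events):
        # Split on common delimiters, roughly as a segmenter would.
        for term in re.split(r"[\s,=;]+", event):
            if term:
                index.setdefault(term, set()).add(i)
    return index

def search_field(events, index, field, value):
    """Find events where field=value, mimicking lookup-then-verify."""
    candidates = index.get(value, set())  # cheap index lookup on the term
    results = []
    for i in sorted(candidates):
        # Expensive part: re-parse each candidate event to confirm the
        # term really appears as the value of `field`, not elsewhere.
        if re.search(rf"\b{re.escape(field)}={re.escape(value)}\b", events[i]):
            results.append(i)
    return results

events = [
    "user=alice action=login",
    "user=bob note=alice mentioned in ticket",  # 'alice' outside the field
]
index = build_index(events)
print(search_field(events, index, "user", "alice"))  # -> [0]
```

Both events are candidates because the term "alice" occurs in both, but only the first survives the verification step; the more often the term occurs outside the searched field, the more wasted parsing work this causes.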


SplunkDash
Motivator

Hello, 

Thank you so much for your quick reply; it makes sense to me. Do you think using the REST API (instead of UF/HF pushes) for this ingestion would optimize the process? Thank you again.


PickleRick
SplunkTrust

Not really. It could help a bit if you had indexed fields and you created and pushed them explicitly via HEC, instead of having the HF/indexer parse them out, but other than that there should not be much of a real difference. The search-time extractions are just that: performed at search time, every time you search through your events.
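Pushing indexed fields explicitly via HEC, as mentioned above, means supplying them under the "fields" key of the HEC JSON payload so the indexer stores them without parsing the raw event. A minimal sketch (the host, token, sourcetype, and field names are placeholders, not from the original thread):

```python
import json

def build_hec_event(raw_event, indexed_fields, sourcetype="my_large_events"):
    """Build a JSON payload for HEC's /services/collector endpoint.

    Keys under "fields" become indexed fields, so the indexer does not
    have to parse them out of the (large) raw event at index time.
    """
    return {
        "event": raw_event,
        "sourcetype": sourcetype,
        "fields": indexed_fields,
    }

payload = build_hec_event(
    "very large raw event body ...",
    {"status": "200", "region": "us-east-1"},
)
print(json.dumps(payload, indent=2))

# Actually sending it would look roughly like this (requires the
# `requests` package; URL and token are placeholders):
# requests.post(
#     "https://splunk.example.com:8088/services/collector",
#     headers={"Authorization": "Splunk <hec-token>"},
#     json=payload,
# )
```

Note PickleRick's caveat still applies: this only helps for fields you index explicitly; ordinary search-time extractions are unaffected.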
