Getting Data In

How to upload and index a text file containing more than 1500 lines without any line breaks?

mmohiuddin
Path Finder

Hi

I would like to upload a text file containing more than 1500 lines without any line breaks. How do I do this in Splunk?

Here is my props.conf

[sourcetype]
MAX_EVENTS = 2000
TRUNCATE = 99999
SHOULD_LINEMERGE = TRUE
DATETIME_CONFIG = CURRENT

Even after making these changes, the data is not indexed correctly; the file is still being split at line breaks. How do I get all 1500+ lines indexed as a single event?

Please let me know.

Thanks
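[Editor's note] One common alternative to line merging, offered here as a sketch rather than a confirmed fix for this case: disable SHOULD_LINEMERGE and set LINE_BREAKER to a regex that can never match, so the parser never splits the file and the whole thing becomes one event. Also note that TRUNCATE is a limit in bytes, not lines, so a partially indexed event can be a truncation symptom; TRUNCATE = 0 disables truncation. The stanza name [sourcetype] is the placeholder used in this thread.

```ini
# Hypothetical props.conf sketch (applied on the indexer or heavy forwarder),
# assuming the goal is to index the entire file as a single event.
[sourcetype]
SHOULD_LINEMERGE = false
# LINE_BREAKER needs a capture group; (?!x)x can never match, so the file
# is never broken into separate events.
LINE_BREAKER = ((?!x)x)
# TRUNCATE is measured in bytes; 0 disables truncation entirely.
TRUNCATE = 0
DATETIME_CONFIG = CURRENT
```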

somesoni2
Revered Legend

Try this. (props.conf on Indexer/Heavy Forwarder)

[sourcetype]
BREAK_ONLY_BEFORE = ^JUNKCHAR
DATETIME_CONFIG = CURRENT
MAX_EVENTS = 2000
NO_BINARY_CHECK = 1
SHOULD_LINEMERGE = true
TRUNCATE = 99999

mmohiuddin
Path Finder

I still get only 704 lines indexed out of a total of 1503 after applying the new props.conf on the indexer.
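[Editor's note] A likely explanation for the event being cut off partway, offered as an assumption rather than a diagnosis confirmed in the thread: TRUNCATE is a byte limit, not a line limit, so the 99999-byte ceiling in the config above could plausibly be reached after roughly 700 lines of this file. A sketch of the change:

```ini
# Hypothetical props.conf fragment: TRUNCATE caps event size in bytes;
# 0 disables truncation so long merged events are not cut short.
[sourcetype]
TRUNCATE = 0
```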


edrivera3
Builder

I have this conf and it worked for me.

SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = (ADFASDFA)
NO_BINARY_CHECK = true
MAX_EVENTS = 2000
is_valid = true
disabled = false
pulldown_type = true

Maybe there is something in your config that you just don't need.
