Getting Data In

What is the best way to ingest a 300GB CSV into Splunk?

rajyah
Communicator

I tried ingesting it using add oneshot, but midway through, Splunk suddenly stopped.

Aside from splitting the file, is there a way to ingest it without Splunk stopping unexpectedly?

Can Splunk monitor three 200GB+ files at the same time?
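For context, by "monitor" I mean something like the following in inputs.conf on the forwarder (the paths, index, and sourcetype here are just examples):

```ini
# inputs.conf -- example paths and names, adjust to your environment
[monitor:///data/csv/part1.csv]
index = main
sourcetype = csv

[monitor:///data/csv/part2.csv]
index = main
sourcetype = csv

[monitor:///data/csv/part3.csv]
index = main
sourcetype = csv
```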

Thank you.

Regards,
Raj

HiroshiSatoh
Champion

Can you tell us how your indexers are set up?
One indexer is typically said to handle about 300GB of log data per day.


HiroshiSatoh
Champion

What is the reason for using a heavy forwarder?

With default settings, Splunk does not handle ingesting very large files in one go well.

The problem is that while a file is being ingested, a single pipeline monopolizes the processing.

You need to avoid concentrating the load on one indexer, and likewise on the one pipeline that detects and ingests the file.

It is important to first investigate which part of the process is the bottleneck.
https://wiki.splunk.com/Community:HowIndexingWorks
https://docs.splunk.com/Documentation/Splunk/7.2.5/Indexer/Pipelinesets
https://wiki.splunk.com/Community:Troubleshooting_Monitor_Inputs
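As the Pipelinesets documentation above describes, an instance can run more than one ingestion pipeline, so one large file does not block all other inputs. A minimal sketch (the value 2 is an example; each additional pipeline set costs extra CPU and memory):

```ini
# server.conf on the instance doing the ingestion (e.g. the heavy forwarder)
[general]
parallelIngestionPipelines = 2
```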

rajyah
Communicator

Hi, thank you for the input.

We have a cluster with three indexers.
I'm using a heavy forwarder to process the CSVs before sending them to the cluster. So is it not possible to ingest a file that large?
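For reference, the forwarder's output is load-balanced across the three indexers roughly like this (hostnames and ports are placeholders):

```ini
# outputs.conf on the heavy forwarder -- hostnames are placeholders
[tcpout]
defaultGroup = idx_cluster

[tcpout:idx_cluster]
server = idx1.example.com:9997, idx2.example.com:9997, idx3.example.com:9997
# Switch target indexers on a timer; without this, one large file can
# stay pinned to a single indexer until it finishes.
forceTimebasedAutoLB = true
autoLBFrequency = 30
```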

Regards,
Raj
