
What is the best way to ingest a 300 GB CSV into Splunk?

rajyah
Communicator

I tried ingesting it using add oneshot, but midway through, Splunk suddenly stops.
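For reference, the command was along these lines (the path, index, and sourcetype here are placeholders for my real values):

    ./splunk add oneshot /data/big_file.csv -index main -sourcetype csv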

Aside from splitting the file, is there a way to ingest it without Splunk stopping partway?

Can Splunk monitor three 200 GB+ files at the same time?
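What I have in mind is a monitor stanza in inputs.conf, something like this sketch (the path and index are placeholders):

    [monitor:///data/csv_drop/*.csv]
    index = main
    sourcetype = csv
    # if the files start with identical header lines, salting the CRC
    # with the file path keeps them from being treated as the same file
    crcSalt = <SOURCE>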

Thank you.

Regards,
Raj

HiroshiSatoh
Champion

Could you tell me about your indexer setup?
As a rule of thumb, a single indexer is said to handle about 300 GB of logs per day.


HiroshiSatoh
Champion

What is the reason for using a heavy forwarder?

With default settings, Splunk does not handle ingesting a very large file in one go well.

The problem is that while a file is being read, a single ingestion pipeline monopolizes the processing.

Just as you should not have one pipeline doing all the detecting and ingesting of the file, you should not concentrate the load on one indexer.

It is important to investigate which process is the bottleneck.
https://wiki.splunk.com/Community:HowIndexingWorks
https://docs.splunk.com/Documentation/Splunk/7.2.5/Indexer/Pipelinesets
https://wiki.splunk.com/Community:Troubleshooting_Monitor_Inputs
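If it is unclear which process is the bottleneck, one common first check (my suggestion, not something the links above prescribe) is to look for blocked queues in metrics.log:

    index=_internal source=*metrics.log* group=queue blocked=true
    | stats count by host, name

If a single ingestion pipeline turns out to be the limit and spare CPU cores are available, pipeline sets can be enabled in server.conf. A minimal sketch; note that a given file is still read by one pipeline, so this mainly helps when several large files are ingested at once:

    # server.conf on the instance reading the files
    [general]
    parallelIngestionPipelines = 2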

rajyah
Communicator

Hi, thank you for the input.

It's a cluster with three indexers.
I'm using a heavy forwarder to process the CSVs before sending them to the cluster. So is it not possible to ingest a file that large?
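For reference, the forwarder's outputs.conf is roughly like this (hostnames and ports are placeholders):

    [tcpout]
    defaultGroup = idx_cluster

    [tcpout:idx_cluster]
    server = idx1.example.com:9997, idx2.example.com:9997, idx3.example.com:9997
    # seconds between load-balanced switches; 30 is the default
    autoLBFrequency = 30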

Regards,
Raj
