Deployment Architecture

Can a Splunk deployment configured in AWS cloud (IaaS) read and write to both S3 buckets and EBS?

andrewtrobec
Motivator

Hello,

I am a bit confused as to how Splunk manages its indexes through AWS cloud services, and I am not sure whether the EBS and S3 services are interchangeable for this type of deployment. For example, is S3 only for archiving frozen buckets, or can it be used for hot/warm/cold buckets as well?

Is there some documentation about best practices here?  Compare and contrast?

Thanks!

Andrew

 


aasabatini
Motivator

Hi @andrewtrobec 

Splunk on AWS manages its buckets the same way as anywhere else, as long as the storage is mounted as a file system (which is how EBS is presented).

If you want to use an S3 bucket with Splunk, you need to use the SmartStore feature:

docs.splunk.com/Documentation/Splunk/8.2.3/Indexer/AboutSmartStore

In any case, you have to consider the sizing of your logs and how many days the data needs to stay online.

I hope I've been clear

“The answer is out there, Neo, and it’s looking for you, and it will find you if you want it to.”
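The SmartStore setup mentioned above can be sketched in indexes.conf roughly like this; the volume name, bucket name, and endpoint are placeholders, not anything from this thread:

```ini
# indexes.conf — minimal SmartStore sketch (names are hypothetical)

# Define the S3 remote volume
[volume:remote_store]
storageType = remote
path = s3://my-splunk-smartstore-bucket/indexes
remote.s3.endpoint = https://s3.us-east-1.amazonaws.com

# Point an index at the remote volume; warm/cold buckets are
# uploaded to S3 and fetched back into the local cache on demand
[main]
remotePath = volume:remote_store/$_index_name
homePath   = $SPLUNK_DB/main/db
coldPath   = $SPLUNK_DB/main/colddb
thawedPath = $SPLUNK_DB/main/thaweddb
```

With SmartStore enabled, the cold path exists mostly for compatibility; the cache manager on the indexer decides which buckets stay local.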


isoutamo
SplunkTrust

Hi

Usually a basic Splunk core installation uses "local" storage for all indexes. That local storage can be either EC2 instance storage or EBS. Choose between the two based on how you use your nodes: instance storage is lost when the instance is terminated, while an EBS volume persists and can be attached to a new EC2 instance.

In a "normal" (non-SmartStore) Splunk instance you can move frozen buckets to S3 for archiving. That can be standard S3 or Glacier, and you can even define an automated lifecycle transition from S3 to Glacier based on, e.g., age.
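The frozen-archiving side can be sketched in indexes.conf like this (the archive path is a placeholder); a separate sync job or script would then copy the archive directory to S3, where a bucket lifecycle rule can handle the S3-to-Glacier transition:

```ini
# indexes.conf — archive frozen buckets instead of deleting them
# (path is hypothetical; without coldToFrozenDir or
# coldToFrozenScript, Splunk deletes buckets when they freeze)
[main]
coldToFrozenDir = /mnt/frozen-archive/main

# Roll buckets to frozen after ~90 days (value in seconds)
frozenTimePeriodInSecs = 7776000
```

Note that coldToFrozenDir writes to a local file system path, so getting the data into S3 is a separate step outside Splunk.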

When you are using Splunk's SmartStore, there is a fundamental difference in how instance storage is used. Only hot data lives exclusively on instance storage; all other buckets are uploaded automatically to S3 remote volumes, with copies also kept in the local cache partition (the same partition as the hot buckets). With SmartStore it's crucial to use instances with enough NVMe-based local storage, from a performance point of view.
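The local SmartStore cache described above is sized in server.conf on each indexer; the value here is a placeholder to tune against your NVMe capacity:

```ini
# server.conf — cap the SmartStore cache on this indexer
# (size in MB; 400000 is a hypothetical example, not a recommendation)
[cachemanager]
max_cache_size = 400000
```

The cache manager evicts the least-recently-used warm buckets when this limit is approached, re-fetching them from S3 if a later search needs them.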

In short, EBS and S3 are not interchangeable; each has a separate usage profile.

r. Ismo
