Getting Data In

What's the best practice to get AWS data into the Splunk platform at scale?

akornhauser_spl
Splunk Employee

What's the best practice to get AWS data, such as VPC Flow Logs, CloudWatch, and CloudTrail, into the Splunk platform at scale? The modular inputs in the Splunk Add-on for Amazon Web Services are not sufficient for the scale I need.


akornhauser_spl
Splunk Employee

The Splunk Product Best Practices team provided this response. Read more about How Crowdsourcing is Shaping the Future of Splunk Best Practices.

The best practice solution is to leverage Splunk AWS Project Trumpet to automate how you collect data for many of these popular data sources in your AWS account.

Splunk AWS Project Trumpet is an open-source tool provided by Splunk that lets you select the data sources you want to collect, then specify the HEC token to which Amazon Kinesis Data Firehose (KDF) should send the events. Trumpet then deploys a CloudFormation template that creates the appropriate AWS resources to begin streaming the events to Splunk HEC.
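Under the hood, the CloudFormation stack that Trumpet deploys boils down to a Kinesis Data Firehose delivery stream with a Splunk destination. As a rough sketch of the Splunk-side parameters involved (this is not Trumpet's actual template; the endpoint URL and token are hypothetical placeholders, and field names follow the Firehose `SplunkDestinationConfiguration` API as I understand it):

```python
def splunk_destination_config(hec_url: str, hec_token: str) -> dict:
    """Build the Splunk destination block used when creating a Kinesis
    Data Firehose delivery stream that targets Splunk HEC."""
    return {
        "HECEndpoint": hec_url,              # your Firehose-enabled HEC URL
        "HECEndpointType": "Raw",            # "Raw" or "Event", per source format
        "HECToken": hec_token,
        "HECAcknowledgmentTimeoutInSeconds": 180,
        "RetryOptions": {"DurationInSeconds": 300},
        "S3BackupMode": "FailedEventsOnly",  # back up only failed deliveries to S3
    }

# Placeholder values -- substitute the URL and token Splunk provides you.
config = splunk_destination_config(
    "https://http-inputs-firehose-example.splunkcloud.com:443",
    "00000000-0000-0000-0000-000000000000",
)
```

The `S3BackupMode` setting matters at scale: with `FailedEventsOnly`, events that HEC rejects or times out on land in an S3 bucket for replay instead of being lost.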

Although Splunk AWS Project Trumpet leverages Splunk-supported solutions such as the Amazon Kinesis Data Firehose to Splunk integration, it is not a Splunk-supported solution.

You can read more about Splunk AWS Project Trumpet in the blog post Automating AWS Data Ingestion into Splunk on Splunk Blogs. You can find the utility itself and additional details at splunk-aws-project-trumpet on GitHub.

Here are some recommendations for how to implement Splunk AWS Project Trumpet:

  • If you are a Splunk Cloud customer, first file a ticket with Splunk Support to enable HEC for use with Kinesis Firehose. Splunk Cloud operations will provide you with a destination URL to send your KDF data.
  • If you are a Splunk customer running Splunk Enterprise on-prem, we recommend that you deploy a heavy forwarder in your AWS environment with HEC enabled. KDF delivers the data to the heavy forwarder, which acts as a data-collection tier and forwards it on to your Splunk Enterprise indexers.
  • If you are running the Splunk platform in your own AWS environment, deploy an HEC token to all your indexers, and place a Classic Load Balancer in front of the indexers to distribute the KDF traffic across all of them.
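Whichever topology you choose, it's worth smoke-testing the HEC endpoint before pointing Firehose at it. A minimal sketch that builds the request for HEC's standard event endpoint (the URL and token here are hypothetical placeholders; send the result with curl or any HTTP client):

```python
import json

def build_hec_request(base_url: str, token: str, event: dict):
    """Build the URL, headers, and body for a Splunk HEC event submission,
    useful as a reachability check before wiring up Kinesis Data Firehose."""
    url = f"{base_url}/services/collector/event"
    headers = {
        "Authorization": f"Splunk {token}",  # HEC uses the 'Splunk <token>' scheme
        "Content-Type": "application/json",
    }
    body = json.dumps({"event": event, "sourcetype": "aws:firehose:test"})
    return url, headers, body

# Placeholder endpoint and token -- substitute your own values.
url, headers, body = build_hec_request(
    "https://hec.example.com:8088",
    "00000000-0000-0000-0000-000000000000",
    {"message": "hello from firehose smoke test"},
)
```

A successful submission returns `{"text":"Success","code":0}` from HEC; anything else (401, 403, timeouts) is worth resolving before Firehose starts retrying and backing failed events up to S3.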


The following video provides a visual walkthrough of deploying Splunk AWS Project Trumpet in your own environment.
Automating AWS Data Ingestion into Splunk with Project Trumpet


mtranchita
Communicator

Just to clarify: am I reading it correctly that the Splunk Product best practice for getting AWS data into Splunk is not a Splunk-supported solution?


jacobpevans
Motivator

No, Splunk created AWS Project Trumpet. The line about Amazon Kinesis means that the Kinesis Data Firehose integration isn't supported by Splunk because that piece is supported by AWS.


Cheers,
Jacob

If you feel this response answered your question, please do not forget to mark it as such. If it did not, but you do have the answer, feel free to answer your own post and accept that as the answer.

jwinders
New Member

https://github.com/splunk/splunk-aws-project-trumpet#support

"""

Support

Trumpet is currently maintained by nstonesplunk. This is not a Splunk supported solution.

"""
