Community Blog
Get the latest updates on the Splunk Community, including member experiences, product education, events, and more!

How to Get Started with Splunk Data Management Pipeline Builders (Edge Processor & Ingest Processor)

adepp
Splunk Employee

If you want to gain full control over your growing data volumes, check out Splunk’s Data Management pipeline builders – Edge Processor and Ingest Processor. These pipeline builders are available to Splunk Cloud Platform customers and are included with your subscription.

What are Splunk’s Data Management Pipeline Builders?

Splunk’s Data Management Pipeline Builders are the latest innovation in data processing. They offer more efficient, flexible data transformation – helping you reduce noise, optimize costs, and gain visibility and control over your data in motion.

Splunk Data Management pipeline builders are offered with a choice of deployment model:

  • Edge Processor is a customer-hosted offering for greater control over data before it leaves your network boundaries. You can use it to filter, mask, and transform your data close to its source before routing the processed data to the environment of your choice. 
  • Ingest Processor is a Splunk-hosted SaaS offering ideal for customers who are all-in on cloud and prefer that Splunk manage the infrastructure for them. In addition to filtering, masking, and transforming data, it enables a new capability – converting logs to metrics.

How to Get Started with Pipeline Builders 

If you’d like to request access to either Edge Processor or Ingest Processor, fill out this form to request activation.

If you already have access, you can navigate to the Data Management console in the following ways: 

  • Log in to Splunk Cloud Platform, navigate to the Splunk Web UI homepage, and click Settings > Add data > Data Management Experience to start using the pipeline builders today. 
  • You can also directly navigate to Data Management using the following link: https://px.scs.splunk.com/<your Splunk cloud tenant name>

Review the Lantern articles linked below before building your first pipeline.

Popular Use Cases to Get Started

If you’re ready to filter, mask, and transform your data before routing it to Splunk or Amazon S3, then it’s time to build a pipeline! Pipelines are SPL2 statements that specify what data to process, how to process it, and where to send it. Author pipelines using SPL2, use quick-start templates, and even preview your data before applying it. 
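To give a feel for what a pipeline looks like, here is a minimal SPL2 sketch of a filtering pipeline. The sourcetype and the regex are hypothetical placeholders, and the exact set of functions available can vary by product version, so treat this as an illustration rather than a copy-paste recipe:

```
/* Hypothetical pipeline: keep only error-level events from a
   sample sourcetype and send them to the configured destination. */
$pipeline = | from $source
            | where sourcetype == "sample:app:logs"
            | where match(_raw, /ERROR/)
            | into $destination;
```

Every pipeline follows this same shape – a `from` clause selecting the data, a series of processing commands, and an `into` clause naming the destination.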

Once you’ve configured and deployed Edge Processor or Ingest Processor, you can build a pipeline to accomplish a number of use cases to help you control costs, gain additional insights, and optimize your overall data strategy. Check out the following key use cases to get started:

Security use cases

  • Reduce syslog firewall logs (PAN and Cisco) and route to Amazon S3 for low-cost storage (article)
  • Mask sensitive PII data from events for compliance (video)
  • Enrich data in real time for threat detection with KV Store lookups (article)
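As a sketch of the masking use case above, an SPL2 pipeline can redact PII with `replace` before events ever leave your network. The SSN-style pattern and field usage here are illustrative assumptions, not a production-ready rule:

```
/* Hypothetical masking pipeline: redact SSN-like patterns in _raw
   before routing events onward. */
$pipeline = | from $source
            | eval _raw = replace(_raw, "\\d{3}-\\d{2}-\\d{4}", "XXX-XX-XXXX")
            | into $destination;
```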

Observability use cases

  • Filter Kubernetes data over HTTP Event Collector (HEC) - Edge Processor only (video)
  • Reduce verbose Java app debug logs for faster incident detection (blog)
  • NEW with Ingest Processor: Convert logs to metrics to optimize monitoring, then route to Splunk Observability Cloud (article)
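For the logs-to-metrics use case, Ingest Processor pipelines can emit metric data points derived from log fields. The following is a rough, unverified sketch – the command name, parameters, and field names are assumptions based on the general shape of SPL2 pipelines, so check the Ingest Processor documentation for the exact syntax:

```
/* Hypothetical logs-to-metrics pipeline: turn a numeric field from
   app logs into a gauge metric for Splunk Observability Cloud. */
$pipeline = | from $source
            | where sourcetype == "sample:app:logs"
            | logs_to_metrics name="app.cpu.utilization" metrictype="gauge" value=cpu_pct time=_time
            | into $destination;
```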

Explore more use cases in this comprehensive Lantern article. Here you’ll find additional use cases to filter and route data, as well as use cases to transform, mask, and route data.

Dive in and Unlock New Capabilities

Dive in with the resources below and unlock new capabilities with Federated Search for Amazon S3. Register for our upcoming events to learn more and get live help from the Data Management team, then review the additional resources to support your ongoing journey. 

Upcoming events you don’t want to miss

  • Ask the Experts: Community Office Hours | Sep 25, 2024 at 1pm PT: Ask questions and get help from technical experts on the Data Management team.
  • Tech Talk | Oct 24, 2024 at 11am PT: Dive deep into the capabilities of Splunk’s Pipeline Builders and see them in action.
  • Bi-weekly Webinar | every other Thursday at 9am PT (starting Oct 10, 2024): Topics will vary week-to-week and will cover everything you need to know about the pipeline builders, from how to get started to executing advanced use cases. 

Additional resources

Streamline Your Data Management Even More with Federated Search for Amazon S3 


After routing data to Amazon S3, you can leverage Federated Search for Amazon S3 for a unified experience to search data across Splunk Platform and Amazon S3. This solution is now generally available in Splunk Cloud Platform and can help you further optimize costs while managing compliance. 

We recommend using Federated Search for Amazon S3 for low-frequency ad-hoc searches of non-mission critical data that’s often stored in Amazon S3. Common use cases include running security investigations over historical data, performing statistical analysis over historical data, enriching existing data in Splunk with additional context from Amazon S3, and more.  
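For a sense of what such an ad-hoc search looks like, Federated Search for Amazon S3 uses the `sdselect` command against a federated index. The index name below is made up, and the exact syntax may differ by version, so consult the Splunk Cloud Platform documentation before relying on it:

```
/* Hypothetical ad-hoc search over an S3-backed federated index:
   count archived firewall events by status over the last 30 days. */
| sdselect status, count AS events
  FROM federated:s3_firewall_archive
  WHERE earliest=-30d@d latest=now
  GROUP BY status
```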

You've seen the benefits, you have the use cases, now it’s time to experience the magic of Splunk Data Management for yourself! 

Log in to your Splunk Cloud Platform environment and navigate to the Data Management Experience to start using the pipeline builders today! Request activation here.

Happy Splunking! 

The Splunk Data Management Team
