Getting Data In

What is the best architecture setup to handle multiple log sources and apply all data to specific field headers?

New Member

Hello - as you may see by my account status, I'm a complete newbie to Splunk.

I apologize for any confusion or incorrect terminology. Here are my questions as I've written them down along my journey.

What is the best architecture setup to handle multiple log sources and apply all data to specific field headers? I ask this knowing that I want to define the fields first. These fields will cover the vast majority of incoming log sources, but I know some fields will be null, and that's OK.

I need this data to be under its respective field at index time (I think?), because I need the data not only viewable in Splunk Web but also defined/assigned correctly on the back end for further, separate processing.

Do I need multiple indexers? If so, how do I go about setting this up (step by step please)?
How do I connect a multi-system Splunk architecture? A tutorial reference should be fine.
How do I ensure specific data is assigned to specific field headers?


Splunk Employee

Hi bbyttn,

You can ingest many different types of logs on a single indexer, and the beauty of Splunk is that you don't need to define your fields ahead of time. Only a few basic fields such as source, sourcetype, host, and time are captured at index time; all other fields can be extracted on the fly at search time.
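As a sketch of what search-time extraction looks like, a props.conf stanza might define a field-extraction rule with named capture groups. The sourcetype name, field names, and sample event format below are hypothetical, not from any particular add-on:

```ini
# props.conf -- hypothetical search-time field extraction
[my_custom_sourcetype]
# Pull "user" and "action" out of events shaped like: user=alice action=login
EXTRACT-vpn_fields = user=(?<user>\S+)\s+action=(?<action>\S+)
```

Because EXTRACT-* rules run at search time, you can add or change them at any point without re-indexing the data.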

To understand index time vs. search time, please refer to documentation here:

This section gives you an overview of how data moves through Splunk deployments - the data pipeline:

The Search Tutorial may be a good way to get started with using Splunk.

Hope this helps. Thanks!


New Member

Hi Hunter,

Thank you for the response. I should clarify that I'm more of a newbie with administration and data handling than with using Splunk and its resources within the web interface.

I have a bit better grasp on how the architecture will be set up. I do have a question about data ingestion, though. Can you point me in the right direction if I want to accomplish the following tasks:

  1. I want to receive data from a Cisco ASA FW for VPN logging purposes. This data comes in with different formats depending on the VPN log.

  2. I want to have each uniquely formatted VPN log indexed as a separate sourcetype. How is this possible and where do I start making config changes to accomplish this?

  3. Furthermore, I want to ingest more VPN data from another Cisco ASA FW, but I need to tag this data so it stays separate from the first Cisco FW. I'd prefer to tag it with a custom ID rather than differentiating by the 'host' or 'source' field. If this is done through specific config files, can you point out which ones and where they live?
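For reference, the kind of configuration these questions are asking about might look like the following sketch. The stanza names, port numbers, regex, and the `fw_id` field are all hypothetical placeholders; the actual files would live on the indexer or a heavy forwarder (e.g. in an app's `local` directory):

```ini
# props.conf -- hypothetical: route one ASA VPN event format to its own sourcetype
[source::udp:514]
TRANSFORMS-set_vpn_sourcetype = set_anyconnect_st

# transforms.conf -- rewrite the sourcetype at index time when the event matches
[set_anyconnect_st]
# Example pattern: ASA AnyConnect messages use IDs in the 722xxx range
REGEX = %ASA-\d-722
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::cisco:asa:vpn:anyconnect

# inputs.conf -- tag everything arriving from the second firewall's input
# with a custom indexed field instead of relying on host/source
[udp://10.0.0.2:514]
_meta = fw_id::asa-02

# fields.conf -- declare the custom field as index-time so it is searchable
[fw_id]
INDEXED = true
```

With something like this in place, events from the second firewall could be searched with `fw_id::asa-02` regardless of their host or source values.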

Thank you!

