Getting Data In

How do I ingest logs from a separate set of Splunk Enterprise Servers?

vanderaj2
Path Finder

Hi Splunkers!

I have a Splunk distributed deployment.

One of my customers has a separate Splunk distributed deployment on the same network as mine. All of their Splunk Enterprise servers run on Linux.

This customer has recently asked me to ingest basic Linux security logs (things like /var/log/secure) from all of their Splunk Enterprise servers into my Splunk indexers. This is to maintain audit compliance.

Since these servers are full-blown Splunk Enterprise servers, and not just Universal Forwarders, I'm having some trouble getting logs to come over to my side correctly. They are already indexing Linux security logs on their side via a local inputs.conf in the Splunk TA for NIX.

To get the same logs to my indexers, I'm not sure if this is a matter of putting an outputs.conf on their servers (or modifying an existing outputs.conf on their servers) to also point to my indexers, and then adding a _TCP_ROUTING entry in their existing inputs.conf (the one in the TA for NIX).
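Roughly, this is what I was imagining on their side (the group name, host names, and port are made up; I believe indexAndForward would be needed so their indexers keep indexing locally while also forwarding):

######################################
# outputs.conf (on their servers)    #
######################################

[tcpout]
# keep indexing locally on their indexers while forwarding a copy
indexAndForward = true

[tcpout:my_indexers]
server = myidx01:9997, myidx02:9997

######################################################
# inputs.conf (local dir of their TA for NIX)        #
######################################################

[monitor:///var/log/secure]
index = their_linux_index
# route this input's data to my indexer group as well
_TCP_ROUTING = my_indexers

Does that look like the right general shape?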

Also, their Linux logs index has a different name than mine. Do I need to set up something on my side (props? transforms?) to translate the index name when the logs arrive on my side?
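For the index translation, I was picturing something like this on my indexers (the sourcetype and index names are just placeholders), though I'm not sure index-time transforms will even apply to data that arrives already parsed from their Splunk Enterprise servers, since cooked/parsed data can skip the parsing pipeline on the receiving side:

##############################
# props.conf (my indexers)   #
##############################

[linux_secure]
TRANSFORMS-rewrite-index = rewrite_to_my_index

###################################
# transforms.conf (my indexers)   #
###################################

[rewrite_to_my_index]
# match every event and overwrite the destination index
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = my_linux_index

If parsed data does skip this, I assume the rewrite would have to live on their servers instead.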

Thank you in advance!


klopez30
Explorer

Are those indexers being EOL'd? If not, why not just have your search heads query them as additional search peers? Then you don't have to worry about moving logs, but you still get the results you need.
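For example, from each of your search heads you could add their indexers as peers with something like this (host and credentials are placeholders):

splunk add search-server https://their-idx01:8089 -auth admin:yourpassword -remoteUsername admin -remotePassword theirpassword

Repeat for each of their indexers, and your searches will span both environments.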


jcrabb_splunk
Splunk Employee

Although this doc is specifically for heavy forwarders, this should still work on an indexer:

http://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Forwarddatatothird-partysystemsd#Forwa...

A very basic example would be something like:

#################
# outputs.conf #
#################

[tcpout]
defaultGroup = myindexers

[tcpout:myindexers]
server = indexer01:9997
sendCookedData = true

##############
# props.conf #
##############

# capture data from sourcetype auditd

[auditd] 
TRANSFORMS-auditd = myindexers

###################
# transforms.conf #
###################

[myindexers]
# match every event for this sourcetype
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = myindexers

This would route all data for that sourcetype to "myindexers". You could repeat that for each relevant sourcetype. Further, you can also filter that data further using regex to capture what you want and route the rest to nullqueue:

http://docs.splunk.com/Documentation/Splunk/7.0.2/Forwarding/Routeandfilterdatad#Keep_specific_event...

##############
# props.conf #
##############

# capture data from sourcetype auditd

[auditd]
TRANSFORMS-auditd = nixnull,keepwanted,myindexers

###################
# transforms.conf #
###################

# transforms run in order: send everything to the null queue first...
[nixnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

# ...then pull the events you want back into the index queue...
[keepwanted]
REGEX = <some-regex-for-data-I-want>
DEST_KEY = queue
FORMAT = indexQueue

# ...and route those events to the remote indexer group
[myindexers]
REGEX = <some-regex-for-data-I-want>
DEST_KEY = _TCP_ROUTING
FORMAT = myindexers

One important thing to keep in mind: with default settings in place, if the connection to the "myindexers" group is blocked, the output queues will fill and block, which could result in no data being indexed on the customer's indexers either. If it is important to forward the data but acceptable to lose some of it when the connection drops, you could use the following setting in outputs.conf:

dropEventsOnQueueFull = <integer>
* If set to a positive number, wait <integer> seconds before throwing out
  all new events until the output queue has space.
* Setting this to -1 or 0 will cause the output queue to block when it gets
  full, causing further blocking up the processing chain.
* If any target group's queue is blocked, no more data will reach any other
  target group.
* Using auto load-balancing is the best way to minimize this condition,
  because, in that case, multiple receivers must be down (or jammed up)
  before queue blocking can occur.
* Defaults to -1 (do not drop events).
* DO NOT SET THIS VALUE TO A POSITIVE INTEGER IF YOU ARE MONITORING FILES!

For example:

#################
# outputs.conf #
#################
[tcpout]
defaultGroup = myindexers

[tcpout:myindexers]
server = indexer01:9997
sendCookedData = true
dropEventsOnQueueFull = 60
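And on the receiving side, your indexers would need to be listening on that port (9997 assumed here) via a splunktcp input:

##################################
# inputs.conf (your indexers)    #
##################################

[splunktcp://9997]
disabled = 0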

Again, this is very basic; perhaps I haven't had enough coffee yet and there is an error in there, but it should give you an approach to test, along with the related documentation.

--edit: also, I would keep the index names separate; it is likely better to keep their data isolated instead of intermingled with yours.

Jacob
Sr. Technical Support Engineer

ansif
Motivator

What's the challenge in mimicking their index name and defining it in your environment? Do you want to keep all the security logs in the same index?
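E.g., a minimal indexes.conf stanza on your indexers would do it (index name copied from theirs is a placeholder here; paths assume the defaults):

##################################
# indexes.conf (your indexers)   #
##################################

[their_linux_index]
homePath = $SPLUNK_DB/their_linux_index/db
coldPath = $SPLUNK_DB/their_linux_index/colddb
thawedPath = $SPLUNK_DB/their_linux_index/thaweddb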
