I'm trying to configure the Splunk Universal Forwarder to send logs to Logstash.
I only have access to the Universal Forwarder (not the Heavy Forwarder), and I need to forward audit logs from several databases, including MySQL, PostgreSQL, MongoDB, and Oracle. So far, I’ve been able to send TCP syslogs to Logstash using the Universal Forwarder.
Additionally, I’ve successfully connected to MySQL using Splunk DB Connect, but I’m not receiving any of its logs in Logstash. I would appreciate any advice on forwarding database audit logs through the Universal Forwarder to Logstash in real time, or is there some provision for creating a sink?
Any help or examples would be great! Thanks in advance.
Perhaps an irrelevant question, but did you engage your local infrastructure or security team, or whoever is managing Splunk?
If your org does not have a proper process for making these sorts of adjustments because you only have access to a UF, there may be larger challenges to tackle. Every Splunk deployment is different, so providing more specific advice is hard when you can't spell out everything you can and can't do, since at least some of the possible solutions assume you have proper administrative access to all of the relevant Splunk components, of which an HF is very much an important piece.
That aside, you should be working with your Splunk admins to get the right configurations in place.
Hi John,
We are not using Splunk directly, but we are developing a tool for a customer who does. The customer has data sources connected to Splunk, and they want to forward the audit logs from these data sources using the Universal Forwarder. We're trying to figure out how to forward these audit logs to Logstash, either via TCP or Filebeat. Do you have any suggestions on how to achieve this?
I’ve managed to receive syslog data on my local laptop using the Universal Forwarder with the following outputs.conf:
[tcpout]
[tcpout:fastlane]
server = 127.0.0.1:514
sendCookedData = false
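On the receiving side, a minimal Logstash pipeline for that raw stream might look like the following (the port matches the outputs.conf above; the codec and stdout output are assumptions for testing, not a production setup):

```conf
# logstash.conf - minimal TCP input to receive the raw UF stream
input {
  tcp {
    port => 514
    codec => line   # treat each newline-delimited record as one event
  }
}
output {
  stdout { codec => rubydebug }   # print received events for debugging
}
```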
I’ve also connected MySQL to Splunk using DB Connect to test it out, but I’m not receiving any MySQL logs.
OK. Once again - did you "connect MySQL to Splunk using DB Connect" on the Universal Forwarder? How?
I did connect MySQL to Splunk using DB Connect, but not on the Universal Forwarder; I do not know how I can connect a DB on the UF. Also, I am still figuring out how I can send the audit logs for the connected DBs using the Universal Forwarder.
Pete,
I guess I don't quite understand what you're trying to do.
Are you trying to develop a Splunk app? If so, you should have your own Splunk instance with relevant components to replicate the data.
Is this data going back into the same Splunk instance or are you trying to get the Splunk data into some external non-SplunkDB?
I guess, I'm also asking for how this is architected and what data is being moved from where to where (including ports and data types, etc)?
Also per Splunkbase:
What can DB Connect do?
Database import - Splunk DB Connect allows you to import tables, rows, and columns from a database directly into Splunk Enterprise, which indexes the data. You can then analyze and visualize that relational data from within Splunk Enterprise just as you would the rest of your Splunk Enterprise data.
Database export - DB Connect also enables you to output data from Splunk Enterprise back to your relational database. You map the Splunk Enterprise fields to the database tables you want to write to.
Database lookups - DB Connect also performs database lookups, which let you reference fields in an external database that match fields in your event data. Using these matches, you can add more meaningful information and searchable fields to enrich your event data.
Database access - DB Connect also allows you to directly use SQL in your Splunk searches and dashboards. Using these commands, you can make useful mashups of structured data with machine data.
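As an illustration of the "database access" point, a DB Connect search can invoke SQL directly with the `dbxquery` command (the connection name and query here are hypothetical examples):

```spl
| dbxquery connection="mysql_prod" query="SELECT user, event_time FROM audit_log"
```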
I apologize for the confusion. I am trying to send the data to a third-party application (not a Splunk database). Currently, I am working to connect various databases to Splunk, as the customer already has databases connected.
My goal is to forward the audit logs from each database which is connected to Splunk to our system. We are using Logstash to receive the data, but since Logstash does not have a Splunk input plugin, I am attempting to use TCP to receive data and send it from the Universal Forwarder to a TCP port, which can be set to 514.
The architecture looks like this: Splunk (with databases) -> Universal Forwarder -> Logstash -> Analysis.
OK. It seems... overly complicated. As I understand it, you have a customer who has DB Connect inputs from which data is pulled from production databases, right? But no audit logs, right?
And now you want to pull the audit logs which are not gonna be sent to Splunk and send them away?
That makes no sense. (also I'm not sure all databases actually store audit data in databases themselves; as far as I remember, MySQL logged audit events to normal flat text files; but I haven't worked with it for quite some time so I might be wrong here).
Why not use something that will directly connect your source with your destination without forcing Splunk components into doing something they are not meant to do?
So, I just figured out that the customer does not connect any DB to Splunk using DB Connect. They just use
Database native log -> Universal forwarder -> Splunk.
Your doubt is valid: why not read directly from the native logs? But now that I see they do not connect their DBs to Splunk and just forward the logs, they can configure the UF in such a way that it sends us the logs alongside Splunk.
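For what it's worth, a UF can clone its inputs to more than one output group, so a sketch of that dual-destination outputs.conf could look like this (group names and addresses are examples, not your actual hosts):

```conf
# outputs.conf on the UF: one cooked group for the Splunk indexers,
# one raw group for Logstash; events are cloned to both groups
[tcpout]
defaultGroup = splunk_indexers, logstash

[tcpout:splunk_indexers]
server = indexer1.example.com:9997

[tcpout:logstash]
server = logstash.example.com:514
sendCookedData = false
```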
I highly doubt that you managed to
1) Run DBConnect on UF
2) Send data from UF to Logstash (Logstash doesn't have a Splunk input plugin, and the UF cannot send anything except S2S or S2S over HTTP).
Sorry, I am new to Splunk. Yes, I have only been able to connect MySQL to DB Connect, but I am not able to get that data into Logstash.
Any idea how I can get the audit logs into Logstash through TCP? The UF can forward logs over TCP, and Logstash has an input plugin for TCP.
No.
1. DBConnect doesn't run on UF. DBConnect requires a "full" Splunk instance to run (some functionality runs on search-heads, some on heavy forwarders). So you couldn't have installed and run it on UF. As simple as that
2. Just because something is called "TCP Output" in Splunk terminology doesn't mean it produces a simple raw TCP data stream. The outputs specified with a "tcpout" stanza in outputs.conf use the proprietary Splunk S2S protocol to forward data to downstream components (indexers or intermediate forwarders). Logstash doesn't know how to handle that.
You could try to use an HF with a syslog output to forward data to an external receiver. I think that's the only reasonable approach here.
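For reference, a syslog output on a heavy forwarder is configured in outputs.conf with a [syslog:...] stanza, roughly like this (the group name and address are examples; note this stanza type needs parsed data, which is why it belongs on an HF rather than a UF):

```conf
# outputs.conf on a heavy forwarder: forward events as syslog
# to an external receiver such as Logstash
[syslog:logstash_syslog]
server = logstash.example.com:514
type = tcp
```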
Thank you for your response. I’m limited to using the Universal Forwarder (UF) only. From various resources, I’ve learned that UF can send raw data, which works for me.
I’ve successfully set up Logstash to receive syslog data from my Mac using the TCP input plugin. I’m now wondering how I can configure it to receive data based on the Source or SourceType. Alternatively, is there a way to send all data to Logstash in real-time?
Yes, right you are, there is the sendCookedData=false option. It's rarely used so I forgot about that.
But since you're sending raw data this way, you don't get the Splunk metadata (host, source, sourcetype) associated with it.
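As for sending only selected data, the UF can route individual inputs to a specific output group with the _TCP_ROUTING setting in inputs.conf. A sketch, reusing the "fastlane" group from the outputs.conf earlier in the thread (the monitored path and sourcetype are examples):

```conf
# inputs.conf on the UF: send only this monitor input to the
# raw "fastlane" output group instead of the default groups
[monitor:///var/log/mysql/audit.log]
sourcetype = mysql:audit
_TCP_ROUTING = fastlane
```

Inputs without _TCP_ROUTING keep going to whatever defaultGroup specifies, so this gives per-source control over what reaches Logstash.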