We have installed Splunk on Windows and want to send Windows logs from the Search Head, LM, and CM to a 3rd party via an indexer. Those logs can be seen in Search Head queries, but the indexer is not forwarding them to the 3rd party.
1) Your architecture: UF --> IDX --> SH
Two sites with a CM/LM cluster: each site has 1 IDX, SH, CM, and LM; on the standby site the CM/LM Splunk service is stopped.
2) Your configuration pertaining to data ingestion and data flow: We are using the indexer to send the data to the 3rd party. All the data is received at the remote end except the Splunk Windows components' logs; we are also able to send the indexer server's own logs to the 3rd party.
Thank you for your reply.
So, you are using [syslog] in outputs.conf on your indexers to send the data to Qradar? Is the other data you are sending to Qradar also being sent from the indexers, rather than from the source? If so, I guess this rules out a connectivity issue.
Lastly, how have you configured the other data sources to send from the indexers to Qradar? Please share config examples of how you've achieved this so we can see if there is an issue here.
Hi,
Please find below,
So, you are using [syslog] in outputs.conf on your indexers to send the data to Qradar? Is the other data you are sending to Qradar also being sent from the indexers, rather than from the source? If so, I guess this rules out a connectivity issue.
: Yes, we are using syslog:
[syslog:xx_syslog]
server = 1.x.1.2:514
type = udp
priority = <13>
timestampformat = %b %e %T
Lastly, how have you configured the other data sources to send from the indexers to Qradar? Please share config examples of how you've achieved this so we can see if there is an issue here.
props.conf for Cisco logs:
[cisco:ios]
TRANSFORMS-soc_syslog_out = send_to_soc
transforms.conf:
[send_to_soc]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = soc_syslog
Ok. If I understand correctly, you are using UFs which send data directly to the indexers, and those indexers index locally as well as send a copy to a syslog destination. And you're doing that by defining transforms that manipulate _SYSLOG_ROUTING on the indexers. Do I have that right?
In this case, data already processed by other "full" Splunk Enterprise components (SHs, CM and so on) is _not_ processed by the indexers.
tl;dr - You must create syslog outputs and transforms for Splunk-originating events on the source servers (SHs, CM...) as well. You might be able to address your problem with ingest actions, but I'm no expert there.
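To make that concrete, here is a minimal sketch of what this could look like on each source server (SH, CM, LM), reusing the stanza names from the earlier config in this thread. The [splunkd] sourcetype match is an assumption - adjust it to the sourcetypes you actually want forwarded:

```ini
# outputs.conf on each source server (SH, CM, LM)
[syslog:soc_syslog]
server = 1.x.1.2:514
type = udp

# props.conf - [splunkd] here is an assumption; list the sourcetypes you want forwarded
[splunkd]
TRANSFORMS-soc_syslog_out = send_to_soc

# transforms.conf - clone every matching event to the syslog output
[send_to_soc]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = soc_syslog
```

Because the SH/CM/LM parse their own internal data locally, these transforms will actually run there, unlike on the indexers.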
Longer explanation:
Data in Splunk can be in one of three "states". Normally an input reads raw data. This raw data - if received on a UF - is split into chunks and sent to an output as so-called "cooked data". This data is not yet split into separate events and is not timestamped; it's just chunks of raw data along with a very basic set of metadata.
If raw data from an input or cooked data from a UF is received by a "heavy" component (a full Splunk Enterprise instance, regardless of its role), it gets parsed - the data is split into single events, timestamps are assigned to those events, indexed fields are extracted, and so on.
At this point we have data which is "cooked and parsed", often called just "parsed" for short. Depending on the server's role, that data might be indexed locally or sent to output(s).
But if parsed data is received on an input, it's not touched again, except for the ingest actions mentioned earlier. It's not reparsed, and no transforms are run on data received in parsed form.
So if you're receiving internal data from your Splunk servers, that data has already been parsed on the source Splunk server - any transforms you have defined on your indexers do not apply to it.
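The three states, and where parsing happens, can be sketched roughly like this:

```
raw     --[UF]-->         cooked           (chunked, basic metadata only)
cooked  --[HF/IDX]-->     cooked + parsed  (event breaking, timestamps,
                                            indexed fields, TRANSFORMS run)
parsed  --[IDX input]-->  indexed as-is    (no re-parsing, TRANSFORMS skipped;
                                            only ingest actions still apply)
```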
Hi @PickleRick I will try the below and update here. Thanks
@PickleRick The SH, CM & LM don't have connectivity to the remote Qradar; only the indexer is configured to send syslog to the remote Qradar. So there's no point configuring syslog output on the SH, CM and LM, right?
Yes. If you don't have "holes" in your firewall to send data directly from the other components to Qradar, it won't work.
You might try to use RULESET in props.conf on indexers instead of TRANSFORMS.
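For illustration, a RULESET stanza looks much like a TRANSFORMS one, but it runs as part of ingest actions, which also apply to already-parsed data arriving at the indexers. The sourcetype below is an assumption:

```ini
# props.conf on the indexers - RULESET runs even on parsed data, unlike TRANSFORMS
[splunkd]
RULESET-soc_syslog_out = send_to_soc
```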
@PickleRick @livehybrid Can I install a Splunk UF on the SH, CM and LM? Is it possible, and will it work? Also, will it cause duplicate logs, from Splunk as well as from the UF?
Technically, you might be able to. It depends on your local limitations, the chosen way of installing the software, and so on. If you bend over backwards you can even install multiple Splunk instances on one host. That doesn't mean you should.
If you do so (I'm still advising against it), each instance will have its own set of inputs and outputs. So if you, for example, point your HF instance to indexers A and the UF instance to indexers B, you will get _all_ events from the HF into indexers A (including _internal) and _all_ events from the UF into indexers B.
EDIT: I still don't see how this would solve your problem of sending logs from the "non-indexer" hosts to the remote third-party solution without sending them there directly...
@PickleRick So is it better to send logs from the SH, LM, and CM directly to the remote server, as recommended earlier, by configuring outputs.conf and props.conf? Also, will it increase the processing load on the SH, LM and CM?
If you want to follow KISS (keep it simple, stupid): as I said, you should do this forwarding/splitting to separate targets on the first full Splunk Enterprise instance the data touches (in this case your SH, CM, LM etc.). Just split it there. In that case I suppose you don't even need anything exotic: just add a new app containing an outputs.conf with the syslog target plus the props/transforms setting _SYSLOG_ROUTING, so events go to both the default output and the syslog target.
But as you have QRadar as a target, the log events might need some modification. I cannot recall what kind of syslog feed QRadar requires; if it supports the defaults Splunk can send, use those. Otherwise you must add props+transforms to modify the events as needed.
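As a sketch of what such an app could look like (the app name and stanza names are hypothetical):

```
$SPLUNK_HOME/etc/apps/soc_forwarding/local/
    outputs.conf     # [syslog:soc_syslog] stanza pointing at the Qradar host
    props.conf       # TRANSFORMS-soc_syslog_out = send_to_soc for the wanted sourcetypes
    transforms.conf  # [send_to_soc] with DEST_KEY = _SYSLOG_ROUTING, FORMAT = soc_syslog
```

Deploying one small app like this to the SH, CM and LM keeps the change self-contained and easy to remove later.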
It really depends on the details. It might be easier to use the RULESET functionality on the indexers. It might be easier to send the data directly from the SH/LM/CM/whatever to Qradar using another (non-Splunk) method. Each of those methods has its pros and cons, mostly tied to manageability and "cleanliness" of architecture.
To ensure we can answer thoroughly, please could you confirm a few things. Are you sending these logs to your own indexers *and* a 3rd party indexer(s)? Or just to the 3rd party?
You say you can see the data on your SH. When you search it, please check the splunk_server field in the interesting fields on the left - are the server(s) listed there your indexers, or your SH?
How have you configured the connectivity to the 3rd party?
Please could you check your _internal logs for any TcpOutputFd errors (assuming standard Splunk-to-Splunk forwarding).
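For example, a search along these lines should surface them (exact field values may vary by Splunk version):

```
index=_internal sourcetype=splunkd component=TcpOutputFd log_level=ERROR
```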
Hi,
Are you sending these logs to your own indexers *and* a 3rd party indexer(s)? Or just to the 3rd party?: Just the 3rd party (Qradar).
You say you can see the data on your SH, when you search it please check the splunk_server field from the interesting fields on the left, is the server(s) listed here your indexers, or SH?: Indexers.
How have you configured the connectivity to the 3rd party?: It's forwarding other syslogs successfully.
Ok. Wait.
You're asking about something not working in a relatively unusual setup.
So firstly describe with details:
1) Your architecture
2) Your configuration pertaining to data ingestion and data flow.
Without it, we have no knowledge of your environment - we don't know what is working and what is not, or what you configured and where in your attempts to make it work - and everyone involved will only waste time ping-ponging questions trying to understand your issue.