Getting Data In

Match 2 stanzas in props.conf

tlmayes
Contributor

I have a requirement to send certain Windows events to BOTH the indexers AND a remote syslog server over TCP.

- The indexers should receive the events in standard Windows multiline format
- The remote syslog should receive the events in single line format

I have a configuration that works, forwarding all events in one format or the other (Windows multiline, or syslog single line), but not both. How can I write props.conf so that the same event is sent to the indexers in one format and to the syslog server in another? If I remove the SEDCMD, everything is received in both locations in Windows multiline format. If I include it, everything is single line. How can I have both, depending on destination rather than source?

PROPS.CONF
[default]
# applies to all events: route everything to the default destination
TRANSFORMS-routing=Everything

[source::WinEventLog:*]
# Windows events: route to syslog and flatten to a single line
TRANSFORMS-routing=send_to_syslog
# replace newlines, carriage returns, and tabs with spaces
SEDCMD-rmlines=s/[\n\r\t]/ /g
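For reference, the transforms.conf behind the two names above is not shown in the post. A plausible sketch, assuming the standard _TCP_ROUTING and _SYSLOG_ROUTING destination keys and invented output group names (primary_indexers, remote_syslog), might look like:

TRANSFORMS.CONF
[Everything]
# match every event and route it to the indexer output group
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = primary_indexers

[send_to_syslog]
# match every event and route it to the syslog output group
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = remote_syslog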

maraman_splunk
Splunk Employee

You need to clone the events in order to have one pristine copy and one transformed copy.
Do that either with CLONE_SOURCETYPE, or by cloning the data to a separate instance (an HF) that applies the SEDCMD and sends the events on as syslog. A rough sketch of the HF side follows below.
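Here is a minimal sketch of that HF-side configuration. The hostnames and the output group name remote_syslog are invented for illustration, and this assumes the UFs clone their output to both the indexers and this HF (see the outputs.conf sketch in the next answer); depending on the deployment you may also need to disable local indexing or other outputs on the HF.

PROPS.CONF (on the HF)
[source::WinEventLog:*]
# flatten the multiline Windows events before they leave as syslog
SEDCMD-rmlines = s/[\n\r\t]/ /g
TRANSFORMS-routing = send_to_syslog

TRANSFORMS.CONF (on the HF)
[send_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = remote_syslog

OUTPUTS.CONF (on the HF)
[syslog:remote_syslog]
server = syslog.example.com:514
type = tcp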


maciep
Champion

I have a similar need, but I send the data from the universal forwarders to both my indexer cluster and a separate heavy forwarder. The heavy forwarder filters, parses and forwards the data to the remote syslog receiver. The events sent directly to the indexer cluster are handled normally. I'm not sure if introducing a heavy forwarder is an option for you.
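For the forwarder side of that layout, a minimal outputs.conf sketch on the UFs (group names and hosts invented for illustration) could clone every event to both destinations:

OUTPUTS.CONF (on the universal forwarders)
[tcpout]
# listing two target groups clones every event to both
defaultGroup = primary_indexers, heavy_forwarder

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

[tcpout:heavy_forwarder]
server = hf1.example.com:9997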

If not, maybe cloning the events is. I haven't used it myself, but it sounds like it would allow you to clone those events to a new sourcetype that you could then parse/filter/forward as you need.

This is a setting in transforms.conf:

CLONE_SOURCETYPE = <string>
* This name is wrong; a transform with this setting actually clones and
  modifies events, and assigns the new events the specified sourcetype.
* If CLONE_SOURCETYPE is used as part of a transform, the transform will
  create a modified duplicate event, for all events that the transform is
  applied to via normal props.conf rules.
* Use this feature if you need to store both the original and a modified
  form of the data in your system, or if you want to send the original and a
  modified form to different outbound systems.
  * A typical example would be to retain sensitive information according to
    one policy and a version with the sensitive information removed
    according to another policy.  For example, some events may have data
    that you must retain for 30 days (such as personally identifying
    information) and only 30 days with restricted access, but you need that
    event retained without the sensitive data for a longer time with wider
    access.
* Specifically, for each event handled by this transform, a near-exact copy
  is made of the original event, and the transformation is applied to the
  copy.  The original event will continue along normal data processing
  unchanged.
* The <string> used for CLONE_SOURCETYPE selects the sourcetype that will be
  used for the duplicated events.
* The new sourcetype MUST differ from the original sourcetype.  If the
  original sourcetype is the same as the target of the CLONE_SOURCETYPE,
  Splunk will make a best effort to log warnings to splunkd.log, but this
  setting will be silently ignored at runtime for such cases, causing the
  transform to be applied to the original event without cloning.
* The duplicated events will receive index-time transformations & sed
  commands from all transforms which match their new host/source/sourcetype.
  * This means that props matching on host or source will incorrectly be
    applied a second time. (SPL-99120)
* Can only be used as part of an otherwise-valid index-time transform.  For
  example, REGEX is required, there must be a valid target (DEST_KEY or
  WRITE_META), etc., as above.
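Putting the pieces together for this question, a minimal sketch might look like the following. The transform name clone_for_syslog, the clone sourcetype wineventlog_syslog, and the output group remote_syslog are invented for illustration, and since CLONE_SOURCETYPE and SEDCMD are parse-time settings this has to run on a heavy forwarder or indexer, not a universal forwarder.

TRANSFORMS.CONF
[clone_for_syslog]
# clone every matching event into a new sourcetype; the routing below is
# applied to the clone only, the original continues on unchanged
REGEX = .
CLONE_SOURCETYPE = wineventlog_syslog
DEST_KEY = _SYSLOG_ROUTING
FORMAT = remote_syslog

PROPS.CONF
[source::WinEventLog:*]
TRANSFORMS-clone = clone_for_syslog

[wineventlog_syslog]
# flattens only the cloned copy; this SEDCMD matches the clone's new
# sourcetype, so the originals keep their multiline format
SEDCMD-rmlines = s/[\n\r\t]/ /g

OUTPUTS.CONF
[syslog:remote_syslog]
server = syslog.example.com:514
type = tcp

The originals keep the WinEventLog sourcetype and reach the indexers in multiline form, while the clones are flattened and leave via the syslog output. (Per the SPL-99120 note above, props matching on source are applied to the clone a second time, but the repeat clone attempt is ignored because the clone already has the target sourcetype.)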

tlmayes
Contributor

Thanks for the response. I learned the hard way what may have been obvious: what I was trying to do was not possible on a single instance. I came to the same conclusion as you, that an additional HF is necessary, and I am deploying one now.

Thanks again
