Using the Field Extractor when sourcetype is nix:syslog

kn450
Explorer

Hi everyone,

I’m currently working on creating field extractions using the Field Extractor in Splunk. However, I’ve run into an issue when the sourcetype is nix:syslog.

Here’s the situation:
I’m sending logs from multiple products through syslog, and they all share the same sourcetype = nix:syslog.
When I use the Field Extractor to create an extraction for one product, it automatically applies to all sources that use this sourcetype.
This causes incorrect extractions for the other products since their log formats are different.

What I’d like to do is create field extractions based on eventtype (or possibly source or host) instead of sourcetype.
Ideally, I’d like to define extractions per eventtype, since that would give me much better control and separation between products.

Is there a way to do this in Splunk — to have field extractions scoped or applied by eventtype (or source/host) instead of sourcetype?
Any advice or best practices would be greatly appreciated.

Thanks in advance!

1 Solution

livehybrid
SplunkTrust

Hi @kn450 

To scope search-time field extractions by host or source instead of sourcetype, you can use the following syntax in your props.conf:

[source::yourSourceName]
...
yourExtractionsHere
...

OR

[host::yourHostName]
...
yourExtractionsHere
...
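For example, here is a concrete sketch of a source-scoped stanza with an inline extraction (the source path, field names, and regex are placeholders, not taken from your environment):

```
# props.conf (search time)
[source::/var/log/productA.log]
EXTRACT-productA_fields = ^(?<timestamp>\S+)\s+(?<severity>\w+)\s+(?<message>.*)$
```

Note that source:: and host:: stanzas also accept pattern matching (e.g. [source::...productA*]), so a single stanza can cover a family of paths.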

For more info check out the docs on props.conf at https://help.splunk.com/en/splunk-enterprise/administer/admin-manual/9.4/configuration-file-referenc...

[<spec>]
* This stanza enables properties for a given <spec>.
* A props.conf file can contain multiple stanzas for any number of
different <spec>.
* Follow this stanza name with any number of the following setting/value
pairs, as appropriate for what you want to do.
* If you do not set a setting for a given <spec>, the default is used.

<spec> can be:
1. <sourcetype>, the source type of an event.
2. host::<host>, where <host> is the host, or host-matching pattern, for an
event.
3. source::<source>, where <source> is the source, or source-matching
pattern, for an event.
4. rule::<rulename>, where <rulename> is a unique name of a source type
classification rule.
5. delayedrule::<rulename>, where <rulename> is a unique name of a delayed
source type classification rule.
These are only considered as a last resort
before generating a new source type based on the
source seen.

🌟 Did this answer help you? If so, please consider:

  • Adding karma to show it was useful
  • Marking it as the solution if it resolved your issue
  • Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing


kn450
Explorer

Thanks in advance for the help.
I’ve created an eventtype="auth_successful" for VPN logs, and I’d like to create field extractions based on this eventtype.

I tried using a stanza like this in props.conf:

[eventtype=auth_successful]
EXTRACT-fields = ...


but it didn’t work.

It seems Splunk doesn’t support applying props directly to eventtypes (as it does for sourcetype, source, or host).

Can someone confirm whether this is the case, and what the best way is to scope extractions so they only apply to events matching this eventtype?

Thanks!


PrewinThomas
Motivator

@kn450 

What’s your setup? Are multiple products sending syslog traffic directly to a Splunk syslog receiver, or do you have a syslog-ng/rsyslog setup in between?

If you have multiple products, it’s better to use syslog-ng/rsyslog to listen on the syslog port (single or dedicated ports), filter incoming messages by pattern, host, or facility, and then send them to different destinations.

Eg:

# Source: listen for syslog on UDP 514
source s_net {
    udp(ip(0.0.0.0) port(514));
};

# Filters based on client IP
filter f_productA { netmask(192.168.x.x/32); };
filter f_productB { netmask(192.168.x.x/32); };

# Destinations: write to different files
destination d_productA {
    file("/var/log/productA.log");
};

destination d_productB {
    file("/var/log/productB.log");
};

# Log paths
log { source(s_net); filter(f_productA); destination(d_productA); };
log { source(s_net); filter(f_productB); destination(d_productB); };


Then use Splunk inputs.conf to monitor each product’s file and assign the correct sourcetype, index, and so on.
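A minimal inputs.conf sketch for the split files above (the sourcetype and index values are illustrative, not required names):

```
# inputs.conf
[monitor:///var/log/productA.log]
sourcetype = productA:syslog
index = network

[monitor:///var/log/productB.log]
sourcetype = productB:syslog
index = network
```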

If you can’t split at the syslog layer, you can still do it on the Splunk side: use props.conf with a TRANSFORMS stanza to rewrite the sourcetype based on a regex match.

Eg:

# props.conf
[nix:syslog]
TRANSFORMS-set_sourcetype = set_productA, set_productB

# transforms.conf
[set_productA]
REGEX = ProductA
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::productA_syslog

[set_productB]
REGEX = ProductB
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::productB_syslog
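Once events arrive with the rewritten sourcetypes, search-time extractions can then be scoped per product in props.conf (a sketch; the field names and regex are assumptions about the log format):

```
# props.conf (search time, e.g. on the search head)
[productA_syslog]
EXTRACT-auth = user=(?<user>\S+)\s+src=(?<src_ip>\S+)
```

Keep in mind that an index-time sourcetype rewrite only affects newly arriving data; events already indexed as nix:syslog keep their original sourcetype.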


Regards,
Prewin
🌟If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!


kn450
Explorer

Hi @PrewinThomas,
Thanks for the detailed explanation.

For the first method — I’m using Splunk Connect for Syslog (SC4S), so I’m not exactly sure where I can modify or split the sources there as in the syslog-ng example.

As for the second method (using props.conf and transforms.conf), I tried it but it didn’t work for me.
I believe the reason is that my data is already indexed, and as far as I understand, these configurations are applied only at index time, typically on a heavy forwarder or the indexer itself, before the data is written.
