
Splunk Server Local OS Log Parsing

jharbrecht
Engager

We have a large number of hosts logging to Splunk via the Universal Forwarder. Our Splunk servers (search heads, heavy forwarders, and indexers) also send their local OS logs to Splunk. All systems run Linux, and we use a custom app to collect the local Linux OS logs from /var/log. Every host running the Universal Forwarder, as well as the search heads and heavy forwarders, gets this app from the deployment server, so they all have an identical copy of the app collecting the Linux OS logs.

Recently we wanted to split these logs into different indexes based on process. In the copy of the custom app on the indexers we added entries to props.conf and transforms.conf and deployed them, then used the deployment server to push the new sourcetype out to all hosts. The logs coming from the UFs worked fine: the indexers began routing their Linux OS logs to the new indexes as expected. However, the local Linux OS logs from the search heads and heavy forwarders continued to go to the old index, even though their sourcetype did change to the new sourcetype we created and deployed via the deployment server.

Question: why does this config work fine for the hosts using the UF but not for the Splunk servers themselves, when they all have the same app installed from the same deployment server and all log to the same indexers?
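
For reference, the inputs side of the custom app is just a file monitor on the logs under /var/log (the props stanza below refers to /var/log/messages). The exact stanza isn't shown here, so treat this as an illustrative sketch; "oldindex" stands in for the original destination index:

inputs.conf

[monitor:///var/log/messages]
sourcetype = company_linux_messages_syslog
index = oldindex
disabled = false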

props.conf

[company_linux_messages_syslog]
pulldown_type = 1
MAX_TIMESTAMP_LOOKAHEAD = 32
TIME_FORMAT = %b %d %H:%M:%S
TRANSFORMS-newindex = company_syslog_catchall, company_syslog, syslog-host
REPORT-syslog = syslog-extractions
SHOULD_LINEMERGE = False
category = Operating System
description = Format found within the Linux log file /var/log/messages


transforms.conf

[company_syslog]
DEST_KEY = _MetaData:Index
REGEX = ^[A-Z][a-z]{2}\s\d{1,2}\s\d{2}:\d{2}:\d{2}\s.*?\s*(docker|tkproxy|auditd|dockerd)\[
FORMAT = syslog

[company_syslog_catchall]
DEST_KEY = _MetaData:Index
REGEX = .
FORMAT = syslog_catchall
1 Solution

maciep
Champion

If those are parse-time configurations, then I believe they take effect on the first full Splunk Enterprise instance the data passes through. In other words, your heavy forwarders and search heads handle that phase of event processing for their own local logs, so those events arrive at the indexers already parsed and the indexers' index-routing transforms never run on them. So you can deploy the props/transforms to the heavy forwarders and search heads as well.

https://wiki.splunk.com/Where_do_I_configure_my_Splunk_settings%3F
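
One way to do that with the deployment server you already have is to either add the same props.conf/transforms.conf stanzas to the app you're already deploying, or target a separate parsing app at just the heavy forwarders and search heads. A rough serverclass.conf sketch (the class name, app name, and host patterns below are hypothetical):

serverclass.conf

[serverClass:company_linux_parsing]
whitelist.0 = hf-*
whitelist.1 = sh-*

[serverClass:company_linux_parsing:app:company_linux_parsing]
restartSplunkd = true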



PickleRick
SplunkTrust

One quick question - why don't you make sure that all those daemons log to separate files? You'd be able to set up different inputs for them. That's usually more convenient.

Yes, it requires some reconfiguration on the daemons' side.
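
A rough sketch of what the Splunk side could then look like, assuming the syslog daemon on each host has been reconfigured to write those processes to their own files (the file name for docker is illustrative; the index names come from the transforms above):

inputs.conf

[monitor:///var/log/docker.log]
sourcetype = company_linux_messages_syslog
index = syslog

[monitor:///var/log/messages]
sourcetype = company_linux_messages_syslog
index = syslog_catchall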

