Hi @abi2023,

If you need to mask data pre-transit for whatever reason and the force_local_processing setting doesn't meet your requirements, you can use the unarchive_cmd props.conf setting to stream inputs through Perl, sed, or any command or script that reads input from stdin and writes output to stdout. For example, to mask strings that might be IPv4 addresses in a log file using Perl:

/tmp/foo.log

This is 1.2.3.4.
Uh oh, 5.6.7.8 here.
Definitely not an IP address: a.1.b.4.
512.0.1.2 isn't an IP address. Oops.

inputs.conf

[monitor:///tmp/foo.log]
sourcetype = foo

props.conf

[source::/tmp/foo.log]
unarchive_cmd = perl -pe 's/[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}/*.*.*.*/'
sourcetype = preprocess-foo
NO_BINARY_CHECK = true
[preprocess-foo]
invalid_cause = archive
is_valid = False
LEARN_MODEL = false
[foo]
DATETIME_CONFIG = NONE
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)

The transmitted events will be masked before they're sent to the receiver:

This is *.*.*.*.
Uh oh, *.*.*.* here.
Definitely not an IP address: a.1.b.4.
*.*.*.* isn't an IP address. Oops.

Regular expressions that work with SEDCMD should work with Perl without modification. The unarchive_cmd setting is a flexible alternative to scripted and modular inputs; despite the name, the sources do not have to be archive files.

As others have noted, you can deploy different props.conf configurations to different forwarders. Your props.conf settings for line breaking, timestamp extraction, etc., should be deployed to the next downstream instance of Splunk Enterprise (heavy forwarder or indexer) or to Splunk Cloud.
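To illustrate the contract unarchive_cmd expects (read raw input from stdin, write transformed output to stdout), here is a sketch of an equivalent filter in Python; the script and function names are mine, not part of the original example. Note that perl -pe without the /g modifier replaces only the first match on each line, so count=1 below mirrors that, and the deliberately loose regex masks 512.0.1.2 even though it is not a valid IPv4 address:

```python
#!/usr/bin/env python3
"""Illustrative stdin-to-stdout masking filter, equivalent to:
perl -pe 's/[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}\\.[0-9]{1,3}/*.*.*.*/'"""
import re
import sys

# Same pattern as the props.conf example; [0-9]{1,3} also matches
# out-of-range octets like 512, so 512.0.1.2 gets masked too.
IPV4_LIKE = re.compile(r'[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}')

def mask(line: str) -> str:
    # count=1 matches perl -pe without /g: first occurrence per line only
    return IPV4_LIKE.sub('*.*.*.*', line, count=1)

if __name__ == '__main__':
    for line in sys.stdin:
        sys.stdout.write(mask(line))
```

Any such script works as an unarchive_cmd value as long as it is executable on the forwarder and does nothing but transform stdin to stdout.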