Getting Data In

Metadata will not rewrite. Why is Splunk ignoring my configurations?

brent_weaver
Builder

I am trying (once again) to rewrite metadata (host, source, and sourcetype) from fields in my event.

I have an event like:

{
    "datasource": "otherport",
    "ident": "root",
    "message": "This is a test",
    "orighost": "play"
}

Note: My initial sourcetype is fluentd when the event comes in.

I created an app and put my config in the $SPLUNK_HOME/etc/apps/fluentd/default directory, and I have a props and a transforms that have no effect here, yet they work perfectly on another host. The data is coming in through a syslog TCP port, 9999, so the initial source is tcp:9999.

props.conf:

[fluentd]
SHOULD_LINEMERGE = false
INDEXED_EXTRACTIONS = json
TRANSFORMS-updateMetaData = autohost,, autosource, autoparse

transforms.conf:

[autosource]
SOURCE_KEY = field:datasource
REGEX = (.*)
FORMAT = sourcetype::$1
DEST_KEY = MetaData:Sourcetype

[autohost]
SOURCE_KEY = field:orighost
REGEX = (.*)
FORMAT = host::$1
DEST_KEY = MetaData:Host

[autoparse]
SOURCE_KEY = field:message
REGEX = (.*)
FORMAT = $1
DEST_KEY = _raw

Any help is appreciated, I cannot figure out why Splunk ignores this config!


brent_weaver
Builder

There is no other local folder in this config. It is a bare-bones install in a VM with no other configs. One thing I realized is that the ordering in my TRANSFORMS line is wrong: in order for me to get fields, I need to first rewrite _raw, which is a JSON string. I thought fixing that would make it work, but it is not working...
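If that ordering theory is right, autoparse would need to run first. A sketch of the reordered line (assuming the order of the TRANSFORMS list controls execution order):

TRANSFORMS-updateMetaData = autoparse, autohost, autosource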

Any more thoughts?


mattymo
Splunk Employee

can you provide the output of:

./splunk btool inputs list tcp --debug

and

./splunk btool props list fluentd --debug

One thing I can think of is that the JSON indexed extractions aren't happening for some reason. Your sourcetype relies on those fields being present to make the rewrites, so maybe they aren't available... hard to say without seeing the system.

Do you have some sample events?

- MattyMo

brent_weaver
Builder
splunk[/opt/splunk/etc/apps/fluentd/local] $ splunk btool inputs list tcp --debug
/opt/splunk/etc/system/default/inputs.conf    [tcp]
/opt/splunk/etc/system/default/inputs.conf    _rcvbuf = 1572864
/opt/splunk/etc/system/default/inputs.conf    acceptFrom = *
/opt/splunk/etc/system/default/inputs.conf    connection_host = dns
/opt/splunk/etc/system/local/inputs.conf      host = splunk
/opt/splunk/etc/system/default/inputs.conf    index = default
/opt/splunk/etc/apps/search/local/inputs.conf [tcp://9999]
/opt/splunk/etc/system/default/inputs.conf    _rcvbuf = 1572864
/opt/splunk/etc/apps/search/local/inputs.conf connection_host = dns
/opt/splunk/etc/system/local/inputs.conf      host = splunk
/opt/splunk/etc/system/default/inputs.conf    index = default
/opt/splunk/etc/apps/search/local/inputs.conf sourcetype = _undefined

splunk[/opt/splunk/etc/apps/fluentd/local] $ splunk btool props list fluentd --debug
/opt/splunk/etc/apps/search/local/props.conf [fluentd]
/opt/splunk/etc/system/default/props.conf    ANNOTATE_PUNCT = True
/opt/splunk/etc/system/default/props.conf    AUTO_KV_JSON = true
/opt/splunk/etc/system/default/props.conf    BREAK_ONLY_BEFORE =
/opt/splunk/etc/system/default/props.conf    BREAK_ONLY_BEFORE_DATE = True
/opt/splunk/etc/system/default/props.conf    CHARSET = UTF-8
/opt/splunk/etc/apps/search/local/props.conf DATETIME_CONFIG =
/opt/splunk/etc/system/default/props.conf    HEADER_MODE =
/opt/splunk/etc/system/default/props.conf    LEARN_MODEL = true
/opt/splunk/etc/system/default/props.conf    LEARN_SOURCETYPE = true
/opt/splunk/etc/system/default/props.conf    LINE_BREAKER_LOOKBEHIND = 100
/opt/splunk/etc/system/default/props.conf    MATCH_LIMIT = 100000
/opt/splunk/etc/system/default/props.conf    MAX_DAYS_AGO = 2000
/opt/splunk/etc/system/default/props.conf    MAX_DAYS_HENCE = 2
/opt/splunk/etc/system/default/props.conf    MAX_DIFF_SECS_AGO = 3600
/opt/splunk/etc/system/default/props.conf    MAX_DIFF_SECS_HENCE = 604800
/opt/splunk/etc/system/default/props.conf    MAX_EVENTS = 256
/opt/splunk/etc/system/default/props.conf    MAX_TIMESTAMP_LOOKAHEAD = 128
/opt/splunk/etc/system/default/props.conf    MUST_BREAK_AFTER =
/opt/splunk/etc/system/default/props.conf    MUST_NOT_BREAK_AFTER =
/opt/splunk/etc/system/default/props.conf    MUST_NOT_BREAK_BEFORE =
/opt/splunk/etc/apps/search/local/props.conf NO_BINARY_CHECK = true
/opt/splunk/etc/system/default/props.conf    SEGMENTATION = indexing
/opt/splunk/etc/system/default/props.conf    SEGMENTATION-all = full
/opt/splunk/etc/system/default/props.conf    SEGMENTATION-inner = inner
/opt/splunk/etc/system/default/props.conf    SEGMENTATION-outer = outer
/opt/splunk/etc/system/default/props.conf    SEGMENTATION-raw = none
/opt/splunk/etc/system/default/props.conf    SEGMENTATION-standard = standard
/opt/splunk/etc/apps/search/local/props.conf SHOULD_LINEMERGE = false
/opt/splunk/etc/system/default/props.conf    TRANSFORMS =
/opt/splunk/etc/system/default/props.conf    TRUNCATE = 10000
/opt/splunk/etc/apps/search/local/props.conf category = Custom
/opt/splunk/etc/system/default/props.conf    detect_trailing_nulls = false
/opt/splunk/etc/system/default/props.conf    maxDist = 100
/opt/splunk/etc/system/default/props.conf    priority =
/opt/splunk/etc/apps/search/local/props.conf pulldown_type = 1
/opt/splunk/etc/system/default/props.conf    sourcetype =

mattymo
Splunk Employee

Why is the sourcetype _undefined on your 9999 input?
Also, INDEXED_EXTRACTIONS = json is not set in your props?
And I'm not seeing your transforms being set in props...
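If the fluentd app's config were actually being read, I'd expect btool to show lines sourced from that app's directory, something like (paths assumed from your description):

/opt/splunk/etc/apps/fluentd/default/props.conf [fluentd]
/opt/splunk/etc/apps/fluentd/default/props.conf INDEXED_EXTRACTIONS = json
/opt/splunk/etc/apps/fluentd/default/props.conf SHOULD_LINEMERGE = false

Instead, every non-default [fluentd] setting in your output is coming from apps/search/local/props.conf.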

- MattyMo

DalJeanis
Legend

...why are there two commas together in the TRANSFORMS line?


brent_weaver
Builder

Hey, good catch. I fixed that after posting and it still doesn't work... I think it has something to do with the fact that this is a TCP input!? My other system is getting these transactions from a Kinesis stream.


mattymo
Splunk Employee

Oh yeah... is that actually your props file?

- MattyMo

brent_weaver
Builder

yes it is!


mattymo
Splunk Employee

Maybe move the props and transforms to the local directory in your app to bump them up the precedence list? If I understand correctly that this config is working on another host, that's what I'd check.

http://docs.splunk.com/Documentation/Splunk/6.5.2/Admin/Wheretofindtheconfigurationfiles

- MattyMo

brent_weaver
Builder

Thank you for the response, but this is not the case. I have already tried putting them in local, but it should not matter, as there is nothing in local; in fact, there is not even a local directory. My hunch is that this has something to do with the type of input. I am at a total loss, and it seems that Splunk is completely ignoring this config for some reason.


mattymo
Splunk Employee

It would matter: if there is another app with a local folder, it will win against this fluentd config where it conflicts (please review file precedence)... but moving on.

What does the data look like in the index at this point? What sourcetype is being applied? syslog? fluentd? JSON?

Can we see your inputs.conf please? Is any other sourcetype coming in on this port?

Is this a standalone Splunk deployment or distributed? If the data is being caught on a TCP port by a forwarder, does the forwarder have this props? Because you are using indexed extractions, you need to ensure the forwarder has the props.

https://docs.splunk.com/Documentation/Splunk/6.5.2/Data/Extractfieldsfromfileswithstructureddata

see caveats section
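For example, if a universal forwarder were receiving this TCP data, the structured-data props would have to live on the forwarder itself, something like (path and app name assumed):

# on the forwarder: $SPLUNK_HOME/etc/apps/fluentd/default/props.conf
[fluentd]
INDEXED_EXTRACTIONS = json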

- MattyMo

brent_weaver
Builder

Hey, thanks for getting back to me. This is a standalone, all-in-one install of Splunk. It is my "lab" server and it's running in a VM on my Mac. I have fluentd installed locally on the machine, and it loops back to TCP port 9999 to write to Splunk. I do have a props for the TCP input by virtue of the fact that I am setting the input sourcetype as fluentd, so in the Splunk UI it does show as sourcetype=fluentd. So I set up a props with [fluentd] as a stanza. There is no forwarder involved here; from a Splunk standpoint it is just another raw syslog feed.
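For reference, here is what I believe the input stanza should look like (just a sketch; the btool output I posted earlier shows sourcetype = _undefined on that stanza instead, which may be the problem):

[tcp://9999]
connection_host = dns
sourcetype = fluentd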

Let me know, 'cause this is killing me.


mattymo
Splunk Employee

OK cool, I will try it in the lab and see if I can figure out what's up.

- MattyMo