Getting Data In

Why is my props.conf for a specific sourcetype not working as expected?

matt144
Explorer

When I place my props and transforms on the production system, I don't get the expected results. The configuration should take the sourcetype webseal:syslog, which is ingested from /var/log/messages, and set a new timestamp, host, and sourcetype (the timestamps in the file are all in different formats). The app is placed on our heavy forwarders (I know) in both our Dev and production systems. It works perfectly in Dev, but does nothing in production.

Let's start with my props.conf, since I haven't confirmed any issue with the transforms yet, and I know TIME_PREFIX isn't working:

[webseal:syslog]
TIME_PREFIX = ^\w{3}\s+\d+\s\d+\:\d{2}\:\d{2}\s\S+\s\S+\s
SHOULD_LINEMERGE = False
TRANSFORMS-host = webseal-host
TRANSFORMS-sourcetype = webseal-null, request-ST, isam-ST, lavender-ST, pdweb-ST
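For reference, the transforms classes named above live in a transforms.conf that wasn't shared. A sketch of what such stanzas typically look like (illustrative only -- the real regexes, class contents, and target sourcetypes here are assumptions, not taken from the actual app):

```ini
# transforms.conf (hypothetical sketch -- actual stanzas were not posted)

[webseal-host]
# Pull the syslog host field out of the line and use it as the event host
REGEX = ^\w{3}\s+\d+\s\d+:\d{2}:\d{2}\s(\S+)\s
DEST_KEY = MetaData:Host
FORMAT = host::$1

[webseal-null]
# Drop unwanted events (e.g. the audispd lines) before indexing
REGEX = audispd
DEST_KEY = queue
FORMAT = nullQueue

[pdweb-ST]
# Route pdweb statistics lines to their own sourcetype
REGEX = pdweb\.
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::webseal:pdweb
```

The attribute names (REGEX, DEST_KEY, FORMAT, MetaData:Host, MetaData:Sourcetype, nullQueue) are standard transforms.conf settings for index-time routing; everything else is a placeholder.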

Here's what I've checked:

  • btool on the HFs shows the props and transforms are being read
  • The GUI on the HFs shows the props are being read
  • I checked all my regex statements in Splunk search and on regex101; all correct.
  • Tried putting the props and transforms statements in a different parsing app that is working. No luck.
  • Tried putting props and transforms in system/local on the HF. No luck.
  • Tried putting the app on the indexers instead. No luck.
  • Tried switching the sourcetype name in inputs and props. No luck.
  • Tried switching the props stanza to [source::/var/log/messages]. No luck.
  • Tried removing the app and setting only TIME_PREFIX through the GUI on the HF. No luck.
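For anyone running the same checks, the btool verification from the first bullet looks roughly like this on the HF (paths assume a default install):

```shell
# Show the effective props for the sourcetype, with the file each setting comes from
$SPLUNK_HOME/bin/splunk btool props list webseal:syslog --debug

# Show the effective transforms stanzas the same way
$SPLUNK_HOME/bin/splunk btool transforms list --debug
```

The `--debug` flag prints the source file next to each setting, which is useful for spotting precedence problems between apps and system/local.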

And yes, I restarted splunkd between all my tests. I've run out of ideas, and I have no option other than ingesting all these logs from one file.

0 Karma
1 Solution

matt144
Explorer

Thanks for everyone's help.

It turns out the admins installed an HF on the syslog server rather than a UF. So the logs were coming in already marked as parsed, and skipping directly to the index queue.
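If it helps anyone hitting the same thing: a quick way to tell whether the instance on a server is a universal forwarder or a full Splunk Enterprise install (usable as an HF) is the PRODUCT line in splunk.version (path assumes a default install):

```shell
# Prints PRODUCT=splunkforwarder on a UF,
# PRODUCT=splunk on a full Splunk Enterprise / heavy forwarder install
grep PRODUCT "$SPLUNK_HOME/etc/splunk.version"
```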


0 Karma


Anam
Community Manager

Thanks for coming back and sharing what the actual solution was. Please don't forget to click "Accept" on your answer.

0 Karma

matt144
Explorer

I also just ran a oneshot (with the sourcetype set) of some sample logs from the prod system into the dev system. The dev system read them correctly.
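For anyone wanting to reproduce this test, the oneshot CLI invocation looks like this (the file path here is a placeholder):

```shell
# One-time ingest of a sample file with an explicit sourcetype
$SPLUNK_HOME/bin/splunk add oneshot /tmp/webseal_sample.log -sourcetype webseal:syslog
```

Because oneshot data passes through the full parsing pipeline on the instance it is run on, it is a handy way to isolate props/transforms behavior from the forwarding path.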

0 Karma

FrankVl
Ultra Champion

If it works in dev, but not in prod, can you perhaps highlight differences in those two environments?

Is the data actually getting indexed with the webseal:syslog sourcetype? Just to rule out a typo in the inputs.conf in your production environment...

0 Karma

matt144
Explorer

Dev is basically a simple SH-IDX-HF environment. In production we have an SH cluster, an indexer cluster, and two HFs. Beyond that we try to keep them as similar as possible; there are plenty of little differences, but nothing I can think of that should matter here.

It is being indexed with webseal:syslog as the sourcetype. I also tried matching on the source, so it's definitely not a typo.

0 Karma

mayurr98
Super Champion

Can you provide sample events?

0 Karma

matt144
Explorer

Sure. The last one without the timestamp is the one that goes to the nullqueue.

Jan 22 11:15:16 avc-abcdsa-0023 webseal-something01-httpclf-fds[60611] 10.10.102.42 10.123.10.5 MJDKDSA 22/Jan/2018:11:15:02 -0500 001095374 "GET /LavenderService/resources/Cases/123456/rss/ HTTP/1.1" 200 24699 - - "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/7.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; InfoPath.3; Microsoft Outlook 15.0.4981; ms-office; MSOffice 15)"
Jan 22 11:34:30 abc-jdfkld-0025 webseal-something01-stats-fds[61219] 2018-01-22-11:34:00.000+00:00I----- pdweb.authn total#011 : 0.032
Jan 19 00:08:16 abc-fjdks-0027 webseal-something01-msglog-abc[31479] 2018-01-19-00:08:02.897-05:00I----- 0x38CF0966 webseald WARNING wwa cdsso authn-failover.cpp 304 0x7f485be38700 -- DPWWA2406W Could not find the failover session ID in the user's failover token
Jan 22 11:38:15 abc-fjdks-0028 webseal-something01-stats-abc[59546] 2018-01-22-11:38:00.000+00:00I----- pdweb.threads 'default' total#011 : 1000
Jan 22 11:40:41 abc-jfkdd-a002 audispd: node=abc-jfkdd-a002.abc.local type=USER_END msg=audit(1516639241.153:6626356): pid=21293 uid=0 auid=0 ses=943618 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:session_close grantors=pam_loginuid,pam_keyinit,pam_limits acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
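As an offline sanity check (this only exercises the regex itself, not Splunk's parser), the TIME_PREFIX from the props.conf above can be matched against these samples to confirm where timestamp extraction would begin:

```python
import re

# TIME_PREFIX regex copied from the props.conf stanza in the question
TIME_PREFIX = re.compile(r"^\w{3}\s+\d+\s\d+\:\d{2}\:\d{2}\s\S+\s\S+\s")

samples = [
    "Jan 22 11:34:30 abc-jdfkld-0025 webseal-something01-stats-fds[61219] "
    "2018-01-22-11:34:00.000+00:00I----- pdweb.authn total#011 : 0.032",
    "Jan 19 00:08:16 abc-fjdks-0027 webseal-something01-msglog-abc[31479] "
    "2018-01-19-00:08:02.897-05:00I----- 0x38CF0966 webseald WARNING ...",
]

def after_prefix(event):
    """Return the text Splunk would scan for a timestamp, or None if no match."""
    m = TIME_PREFIX.match(event)
    return event[m.end():] if m else None

for event in samples:
    # Both samples match, and the remainder starts at the detailed timestamp
    print(after_prefix(event)[:30])
```

Both samples print the embedded high-resolution timestamp, which is what TIME_PREFIX is supposed to position Splunk at.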

0 Karma

FrankVl
Ultra Champion

So the purpose of that TIME_PREFIX is to get Splunk to use the (more detailed) timestamp inside the event, rather than the one at the start of the line?
I'm wondering whether Splunk's automatic timestamp detection (since you don't specify a TIME_FORMAT) can handle all of these formats, and whether it can handle multiple formats arriving in the same file. Then again, you say it works correctly in Dev; does Dev also have all these different formats in one file?

Could you use a syslog daemon to split these different logs into separate files, rather than having Splunk sort out the 'mess'?
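Along those lines, one thing worth trying (a sketch, assuming the embedded timestamps share the ISO-like format seen in the stats/msglog samples) is pinning the format down explicitly instead of relying on auto-detection:

```ini
# props.conf sketch -- TIME_FORMAT and MAX_TIMESTAMP_LOOKAHEAD added for illustration
[webseal:syslog]
TIME_PREFIX = ^\w{3}\s+\d+\s\d+\:\d{2}\:\d{2}\s\S+\s\S+\s
TIME_FORMAT = %Y-%m-%d-%H:%M:%S.%3N%:z
MAX_TIMESTAMP_LOOKAHEAD = 30
SHOULD_LINEMERGE = False
```

Note this only helps if all the embedded timestamps share one format; the httpclf sample uses a different style (22/Jan/2018:11:15:02 -0500), so that feed would still depend on auto-detection or its own stanza.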

0 Karma

matt144
Explorer

Yes, the second timestamp is the more accurate one. On our Dev system, which runs the same version, Splunk does read all the different time formats correctly.

As of right now, I was told this is our only option as far as the syslog goes. We previously had it broken out.

0 Karma