Deployment Architecture

Splunk UF inputs.conf changed to new index: Why is the source still being forwarded to the old index in Splunk Cloud?

skeer007
Explorer

So a few days ago I typo'd the index name in the inputs.conf file on serverA running the universal forwarder, and inadvertently sent the log data to IndexB. We discovered it a few hours ago, and since then I have verified multiple times that serverA's inputs.conf is correct now and restarted Splunk.
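For reference, the stanza now reads roughly like this (names simplified, but this is the shape of it):

[monitor:///var/log/remote-syslog/asa.log]
index = indexA
sourcetype = cisco:asa
disabled = 0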

It's been about an hour, and this one input stanza is still somehow forwarding log data to IndexB instead of IndexA.

I have grepped splunkd.log, but all I can see is it picking up the monitored file correctly; nothing about its destination index.
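Roughly what I was grepping for, in case it helps (standard UF log path):

grep -i 'asa.log' /opt/splunkforwarder/var/log/splunk/splunkd.log | grep -iE 'TailReader|WatchedFile'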

Any help would be appreciated!


skeer007
Explorer

@isoutamo  Apologies for abandoning this. I do believe it was processing latency on the Splunk Cloud side; when I re-checked the next day, the events had filled in.


skeer007
Explorer

Almost forgot.. we do have a HF, however I don't believe it's affecting this, since we're sending directly from the source -> UF -> SC.


isoutamo
SplunkTrust

As the lag is less than 5s, those events are definitely new.

Are you absolutely sure that those events are coming from this nodeA and nowhere else? Have you checked that they stop coming in when you stop the UF on nodeA?
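i.e. on nodeA:

/opt/splunkforwarder/bin/splunk stop

and then watch whether new events for that host still arrive in SC.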

Can you double-check (with btool) outputs.conf on nodeA, to confirm that it's sending directly to SC instead of to the HF?
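Something like:

/opt/splunkforwarder/bin/splunk btool outputs list tcpout --debug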

Those are the only two reasons which come to my mind for this issue.

BTW: don't run the UF as root (this is a security issue). Instead use e.g. a dedicated splunk user, and then use setfacl to get access to the log files.
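For example (the user name and path here are only placeholders):

# read access to existing files and directories
setfacl -R -m u:splunk:rX /var/log/remote-syslog
# default ACL so newly created log files inherit read access
setfacl -d -m u:splunk:rX /var/log/remote-syslog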

 


skeer007
Explorer

Is there maybe something I can check on the SC side for this? I just tried creating an entirely new index and changing inputs.conf, and still no dice 😞


isoutamo
SplunkTrust

Have you checked that you don't have any app/transforms.conf on the SC side which drops events or changes the index?
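An index override on that side would look something like this (the stanza and transform names here are just examples):

# props.conf
[cisco:asa]
TRANSFORMS-index = override_asa_index

# transforms.conf
[override_asa_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = indexB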

Are all events going to the wrong index, or only some?


skeer007
Explorer

I'm pretty sure. When I query SC, I'm using:

index=indexA host=serverA

The events returned also show the source as the asa.log file in question. I have visually checked the other four UF servers, and the few that have stanzas monitoring an identical path/filename are set to use their own unique index names.

What I do find really odd is that if I tail -f this asa.log file on serverA, it's SUPER busy: many, many events. However, most of those are not hitting indexB, only some. The log events in SC do contain the IP address of the source feeding asa.log on serverA, though.

That might be a difference between the raw syslog in the log file and the cisco:asa sourcetype sent to SC? IDK, to be honest.
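A quick way to see where events from this host are actually landing (wildcards kept broad on purpose):

index=* host=serverA source=*asa.log
| stats count by index, sourcetype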

 

But regardless, I'm trusting the host field in SC for these events as being accurate.

 

16/09/2022 13:22:04.000
Sep 16 08:22:04 x.x.x.x %ASA-5-713904: IP = y.y.y.y, Received encrypted packet with no matching SA, dropping

host: serverA
source: /var/log/remote-syslog/asa.log
sourcetype: cisco:asa
_time: 2022-09-16T13:22:04.000+00:00
index: indexB
linecount: 1
splunk_server: idx-i-xxxxxxxx.splunkcloud.com

skeer007
Explorer

I thought about latency too; however, I did run:

./splunk list inputstatus

And this particular input stanza was showing:

/var/log/remote-syslog/asa.log
file position = 856704
file size = 856704
percent = 100.00
type = open file

Shouldn't that mean that SC is caught up and only receiving new events? If so, any delay should be pretty low, or at least not on the order of 10+ minutes.

Running your suggestion I get:

root@serverA:/opt/splunkforwarder/bin# ./splunk btool inputs list --debug monitor:///var/log/remote-syslog/asa.log
/opt/splunkforwarder/etc/apps/200_syslog_forwarders/local/inputs.conf [monitor:///var/log/remote-syslog/asa.log]
/opt/splunkforwarder/etc/system/default/inputs.conf                   _rcvbuf = 1572864
/opt/splunkforwarder/etc/apps/200_syslog_forwarders/local/inputs.conf disabled = 1
/opt/splunkforwarder/etc/system/default/inputs.conf                   host = $decideOnStartup
/opt/splunkforwarder/etc/apps/200_syslog_forwarders/local/inputs.conf index = indexA
/opt/splunkforwarder/etc/apps/200_syslog_forwarders/local/inputs.conf sourcetype = cisco:asa

Which is correct. And as I type this, that stanza has been disabled for almost 15 minutes, yet my query in SC is still showing events from host serverA coming in. It's boggling!


isoutamo
SplunkTrust

"./splunk list inputstatus" shows only what UF has done, it's not told that events are already in indexers unless you have use useACK on outputs conf.

You should check the lag on the SH side with something like:

index=<your index(es)> source=<your source> host=<your host>
| eval lag = _indextime - _time
| stats avg(lag) as aLag max(lag) as mLag
| eval aLagHuman = tostring(aLag, "duration"), mLagHuman = tostring(mLag, "duration")

r. Ismo

 


skeer007
Explorer

Ah, nice query. The lag range for this host is 0.8 - 2.7 seconds.


isoutamo
SplunkTrust

Do you have any HF on the path to SC which could do transforms?

Another reason could be that you have such a long lag on receiving events that those which are now coming into SC were collected and sent before the change?
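You can see when individual events actually arrived with e.g. (index/source names taken from your post):

index=indexB host=serverA source=/var/log/remote-syslog/asa.log
| eval indexed_at = strftime(_indextime, "%F %T")
| table _time indexed_at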

Also use 

splunk btool inputs list --debug monitor:< your source file > 

on the UF to check that you have made the change to the correct inputs.conf file. Of course, if this is managed by a DS, then you must make the change on the DS side.
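And remember to reload the DS after updating the app there, e.g.:

splunk reload deploy-server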

r. Ismo
