
streamfwd app error in /var/log/splunk/streamfwd.log

johnncennaa
Engager

Hello! I am trying to get the streamfwd app to capture traffic on an interface located on my virtual machine.

Does this app not recognize link layer virtualization? This is the error I am receiving, and I currently can't find a workaround...

"(SnifferReactor/PcapNetworkCapture.cpp:238)  stream.NetworkCapture - SnifferReactor unrecognized link layer for device <lo0>: 253"

I was also receiving the same error when I changed my streamfwd.conf to capture on a different network interface, and I even tried putting the interface into promiscuous mode. Any help/troubleshooting on this would be appreciated! FYSA, I am using 64-bit CentOS 8.
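
For anyone hitting this: "253" is the link-layer (DLT) value libpcap handed back for the device, and it appears to correspond to DLT_NETLINK rather than a normal Ethernet capture, which suggests the problem may be in how the capture is being opened rather than in the interface itself. A quick sanity check (assuming tcpdump/libpcap is installed, and substituting your own interface name) is to ask libpcap directly which link-layer types it offers:

# list the link-layer header types libpcap reports for the interface
# (replace eth0 with lo0, ens33, etc. as appropriate)
sudo tcpdump -i eth0 --list-data-link-types
# a regular Ethernet NIC normally lists EN10MB (Ethernet) here; if Stream
# still logs link layer 253 for the same device, compare how streamfwd is
# being started (user, capabilities) with how this test was run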


seanatrons
Loves-to-Learn
Edit the Splunk systemd service unit file and add/edit the following line under the [Service] section:

AmbientCapabilities=CAP_DAC_READ_SEARCH CAP_NET_ADMIN CAP_NET_RAW
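
For those asking which file this is: the unit name depends on how boot-start was enabled; SplunkForwarder.service (Universal Forwarder) and Splunkd.service (Splunk Enterprise) are the usual defaults, so treat the name below as an example for your host. A sketch using a systemd drop-in instead of editing the generated unit in place:

# open a drop-in override for the unit (adjust the unit name if needed)
sudo systemctl edit SplunkForwarder.service

# in the editor that opens, add:
[Service]
AmbientCapabilities=CAP_DAC_READ_SEARCH CAP_NET_ADMIN CAP_NET_RAW

# apply the change
sudo systemctl daemon-reload
sudo systemctl restart SplunkForwarder.service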

milad001mehdi
New Member

Can you explain more?

Which file should be edited?

Please send the path and file name.


patrickvanreck
Explorer

Hi Splunkers

I'm seeing the same issue and really wonder why Splunk is not fixing it.
It seems to be an incompatibility between the VMware stack and the streamfwd service.
I use Splunk Universal Forwarder 9.1.2 and Splunk Stream 9.1.1.

In particular, the installation on Universal Forwarders fails across the board on Linux systems, which makes Splunk Stream not really usable in a distributed environment with Linux hosts.

My streamfwd.log always shows the same error:

2024-03-08 14:59:54 INFO  [139974317471680] (CaptureServer.cpp:2001) stream.CaptureServer - Starting data capture
2024-03-08 14:59:54 INFO  [139974317471680] (SnifferReactor/SnifferReactor.cpp:161) stream.SnifferReactor - Starting network capture: sniffer
2024-03-08 14:59:54 ERROR [139974317471680] (SnifferReactor/PcapNetworkCapture.cpp:238) stream.NetworkCapture - SnifferReactor unrecognized link layer for device <eth0>: 253
2024-03-08 14:59:54 FATAL [139974317471680] (CaptureServer.cpp:2337) stream.CaptureServer - SnifferReactor was unable to start packet capturesniffer
2024-03-08 14:59:54 INFO  [139974317471680] (CaptureServer.cpp:2362) stream.CaptureServer - Done pinging stream senders (config was updated)
2024-03-08 14:59:54 INFO  [139974317471680] (main.cpp:1109) stream.main - streamfwd has started successfully (version 8.1.1 build afdcef4b)
2024-03-08 14:59:54 INFO  [139974317471680] (main.cpp:1111) stream.main - web interface listening on port 8889

As you can all see, my streamfwd.conf is more or less the same as everyone else's here.
It doesn't matter if, for example, I change ipAddr to 0.0.0.0; I always get the same error.

[streamfwd]
logConfig = streamfwdlog.conf
port = 8889
ipAddr = 127.0.0.1
## --> Token HFWD
httpEventCollectorToken = ba4a2b2-2544-55e3-22ft-234vt68m0szp
## --> Specify the interface
streamfwdcapture.1.interface = eth0
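
In case it helps to narrow this down, here is a rough way to check what privileges streamfwd actually ends up with when systemd starts it; the $SPLUNK_HOME prefix and the streamfwd-rhel6 location are assumptions based on a default forwarder install and the paths mentioned later in this thread, so adjust them for your layout:

# effective capabilities of the running streamfwd process
PID=$(pgrep -f streamfwd | head -n 1)
grep Cap /proc/$PID/status
# decode CapEff (capsh ships with libcap / libcap2-bin)
capsh --decode=$(awk '/CapEff/ {print $2}' /proc/$PID/status)

# ownership and file capabilities of the capture binary
# (assumed path; adjust the Splunk_TA_stream layout to match your install)
ls -l  $SPLUNK_HOME/etc/apps/Splunk_TA_stream/Linux_x86_64/streamfwd-rhel6
getcap $SPLUNK_HOME/etc/apps/Splunk_TA_stream/Linux_x86_64/streamfwd-rhel6

If CapEff decodes without cap_net_raw/cap_net_admin, that would line up with the AmbientCapabilities suggestion earlier in this thread.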


Side remark:

If I reinstall Splunk Enterprise 9.1.2 on the same server on which Universal Forwarder 9.1.2 with Splunk Stream 9.1.1 was installed, Splunk Stream works.
That sounds like a bug in Splunk_TA_stream.

It would be great to hear a statement from Splunk within the next few weeks.
Kind regards

Patrick

 

jratl2t
Loves-to-Learn

I'm having the same problem. Multiple VMs with Stream that had been working all now fail with "unrecognized link layer for this device <eth1> 253". Does the current version no longer support link layer virtualization?


jorob
Explorer

We finally got Stream working, though it's more of a workaround. The problem is in part due to starting the UF using systemd, which allocates CPU slices for different processes. When systemd starts the UF, Stream fails. After disabling start on boot and manually starting the UF from ./splunk start, Stream works.

The second part is that when the UF starts, ownership of all the UF files is chowned to splunk:splunk. This seems logical, to ensure the UF runs as splunk (or splunkfwd). However, when Stream is initially installed, set_permissions.sh changes ownership of ../Splunk_TA_stream/Linux_x86_64/streamfwd-rhel6 to root. Starting the UF undoes this, changing ownership back to splunk. We made streamfwd-rhel6 immutable, which did prevent the ownership change back to splunk, but Stream still failed when starting with systemd.

Ultimately, we had to disable systemd boot-start, make streamfwd-rhel6 immutable (after running set_permissions.sh), then start the UF manually via ./splunk start; a rough sketch of these steps is below.
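
Not presenting this as the official fix, just a sketch of the workaround described above; $SPLUNK_HOME (typically /opt/splunkforwarder for a UF) and the set_permissions.sh / streamfwd-rhel6 paths are assumptions based on this thread, so adjust them to your layout:

# assumes SPLUNK_HOME is set in your shell, e.g. /opt/splunkforwarder
# 1. stop the UF and remove the systemd boot-start configuration
sudo $SPLUNK_HOME/bin/splunk stop
sudo $SPLUNK_HOME/bin/splunk disable boot-start

# 2. re-run the Stream permission script, then mark the capture binary immutable
sudo $SPLUNK_HOME/etc/apps/Splunk_TA_stream/set_permissions.sh
sudo chattr +i $SPLUNK_HOME/etc/apps/Splunk_TA_stream/Linux_x86_64/streamfwd-rhel6

# 3. start the UF manually (as the user you normally run it as, not via systemd)
$SPLUNK_HOME/bin/splunk start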

Splunk needs to fix this so stream works as expected without having to disable boot-start and set the immutable flag.

patrickvanreck
Explorer

Hi Jorob

I saw this option as well, but what if we don't want to run the Splunk daemon via /etc/init.d?
I mean, the problem should be well known to Splunk, and for almost a year we haven't heard about any improvements from them.

I'm a little disappointed that Splunk doesn't describe a workaround in the docs or even look for a solution. It looks like nobody at Splunk cares about this problem.

As I mentioned, I think it's a bad idea to have to install all Universal Forwarders the “old” way just because Splunk Stream can't handle it.

We are all eagerly awaiting Splunk's response.

Greetings

 


adrojis
Loves-to-Learn Lots

Hi,

I have the same problem too, on my Ubuntu VM with the interface ens33. If you find a solution, please ping me.


milad001mehdi
New Member

Hello,

I've had this problem since last week and keep getting this error.

I searched other communities but didn't find any solution.

I'm using 64-bit Ubuntu.

I checked both interfaces connected to my forwarder; both of them show this problem and error.

Please help if anyone has a solution.
