Hello,
I'm hoping someone is able to help me find out what's going on with Splunk Stream and Netflow because I'm tearing my hair out trying to get it working.
I have a separate indexer and search head and am trying to use the independent Stream forwarder. The forwarder host also has a UF installed but not Splunk_TA_stream; incidentally, I tried getting it working with the Splunk_TA_stream app and saw similar results.
SH configuration:
Splunk App for Stream installed and configured as per https://docs.splunk.com/Documentation/StreamApp/7.3.0/DeployStreamApp/UseStreamtoingestNetflowandIPF...
Indexer configuration:
$SPLUNK_HOME/etc/apps/splunk_httpinput/local/inputs.conf
[http]
disabled = 0
port = 8088
dedicatedIoThreads = 8
[http://streamfwd]
description = Splunk Stream HEC
disabled = 0
index = main
token = <hec_token>
indexes = _internal,main
[splunk@<indexer> ~]$ netstat -antup | grep 8088
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 0.0.0.0:8088 0.0.0.0:* LISTEN 11580/splunkd
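For completeness, a quick smoke test I'd suggest against the HEC endpoint itself (same placeholders as above; the `/services/collector/event` endpoint and `Authorization: Splunk <token>` header are standard HEC):

```
# Send a test event straight to HEC on the indexer:
curl -k https://<indexer>:8088/services/collector/event \
     -H "Authorization: Splunk <hec_token>" \
     -d '{"event": "hec smoke test", "index": "main"}'
# A healthy endpoint answers: {"text":"Success","code":0}
```

If that succeeds but Stream events still don't appear, the problem is upstream of HEC.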
Independent forwarder setup:
/opt/streamfwd/local/inputs.conf
[streamfwd://streamfwd]
splunk_stream_app_location = https://<search_head>:8000/en-us/custom/splunk_app_stream/
stream_forwarder_id =
disabled = 0
/opt/streamfwd/local/streamfwd.conf
[streamfwd://streamfwd]
authToken = <auth_token_generated_by_curl_config>
[streamfwd]
httpEventCollectorToken = <HEC_TOKEN>
processingThreads = 4
indexer.0.uri = https://<indexer>:8088
netflowReceiver.0.port = 9996
netflowReceiver.0.decoder = netflow
netflowReceiver.0.ip = <forwarder_ip>
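On the forwarder host itself, a couple of sanity checks (assuming the same port 9996 as configured above; `<iface>` is whatever interface faces the router):

```
# Confirm streamfwd is actually bound to the NetFlow port:
netstat -anup | grep 9996
# Confirm NetFlow datagrams are arriving on that port:
tcpdump -ni <iface> udp port 9996 -c 5
```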
If I run the search index=main sourcetype="stream:*", the only events I see are:
{
  endtime: 2020-12-22T12:18:36Z
  event_name: netFlowOptions
  exporter_ip: <router_ip>
  exporter_time: 2020-Dec-22 12:18:36
  exporter_uptime: 4273621448
  netflow_version: 9
  observation_domain_id: 0
  seqnumber: 340894
  timestamp: 2020-12-22T12:18:36Z
}
and running index=_internal sourcetype="stream:*" host="<forwarder>" gives me two sourcetypes, stream:log and stream:stats. stream:log gives me nothing of interest, just decode errors until the template is received; after that, the errors stop.
stream:stats shows me:
{
  agentMode: 1
  ipAddress: <stream_forwarder_ip>
  netflow: {
    NetflowDataHandlers: [
      {
        NetflowDecoders: [
          {
            name: Netflow
            processedRecords: 210991
          }
        ]
        droppedPackets: 0
        id: 0
      }
    ]
    NetflowReceivers: [
      {
        id: 0
        recvdBytes: 8861500
        running: true
      }
    ]
    eventsIn: 210964
    eventsOut: 210964
    id: NetflowManager
    running: true
  }
  osName: Linux
  senders: [
    {
      busyConnections: 0
      configTemplateName:
      connections: [
        {
          endpoint: 0.0.0.0:0
          id: 0
          lastConnect: 2020-12-22T12:15:55.118285Z
          numErrors: 5
          numSent: 20
          queueSize: 0
          status: closed
          workStatus: idle
        }
        {
          endpoint: 0.0.0.0:0
          id: 1
          lastConnect: 2020-12-22T12:14:54.193007Z
          numErrors: 4
          numSent: 27
          queueSize: 0
          status: closed
          workStatus: idle
        }
        {
          endpoint: 0.0.0.0:0
          id: 2
          lastConnect: 2020-12-22T12:14:54.200473Z
          numErrors: 3
          numSent: 20
          queueSize: 0
          status: closed
          workStatus: idle
        }
        (seven further connection entries collapsed in the viewer)
      ]
      dateLastUpdated: 1608637900306
      encrypted: true
      host: <search_head>
      id: <some_id>
      key:
      lastErrorCode: 0
      name:
      numBytes: 4367915
      numErrors: 41
      numStreams: 1
      openConnections: 0
      port: 8000
      requestsQueued: 0
      requestsSent: 229
      running: true
      streamForwarderGroups: [ (collapsed) ]
      streamForwarderId: <forwarder_fqdn>
      streams: [
        {
          bytes: 8016506
          bytes_in: 8016506
          bytes_out: 0
          delta_bytes: 339112
          delta_bytes_in: 339112
          delta_bytes_out: 0
          delta_events: 8924
          delta_raw_bytes: 5889905
          events: 210964
          id: TEST_NETFLOW
          raw_bytes: 130470120
          stats_only: 0
        }
      ]
    }
  ]
  sniffer: { (collapsed) }
  systemType: x86_64
  versionNumber: 7.3.0
}
which suggests that netflow receivers are working as expected.
Running tcpdump on the receiver host, I can see that I am receiving genuine NetFlow v9, which is readable in Wireshark.
I've looked at splunkd.log on the indexer and I'm not seeing anything that relates to the stream forwarder. I'm at a loss where to look next. I have gone through the documentation countless times over the last few days to make sure I'm not missing anything.
Any help would be greatly appreciated!
Thanks
It turns out that Stream is actually configured correctly. The reason I was only seeing the heartbeats is that there is a delay of 5000+ seconds between event time and index time.
I have the same problem. How do I tune these times?
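Before tuning anything, it's worth measuring the lag. `_indextime` is a standard internal field, so a search along these lines (a sketch; adjust index/sourcetype to your setup) shows how far event time trails index time:

```
index=main sourcetype="stream:*"
| eval lag_secs = _indextime - _time
| stats min(lag_secs) avg(lag_secs) max(lag_secs) by sourcetype
```

A consistently large lag usually points at the exporter's clock rather than at Splunk.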
In my case, I ran a packet capture between the switches and the Splunk server. Using this Splunk article, I calculated the times and Splunk was correct. https://docs.splunk.com/Documentation/StreamApp/7.3.0/DeployStreamApp/UseStreamtoingestNetflowandIPF...
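The arithmetic behind that check is straightforward: a NetFlow v9 header carries the exporter's wall clock (unix_secs) and its uptime in milliseconds (SysUptime), and each flow record's FIRST_SWITCHED is an uptime offset (per RFC 3954). A small sketch with hypothetical numbers:

```python
# NetFlow v9 timestamps are relative to the exporter's own clock.
# Hypothetical values for illustration only:
unix_secs = 1608639516          # exporter wall clock at export time (header)
sysuptime_ms = 4273621448       # exporter uptime at export time (header)
first_switched_ms = 4273601448  # uptime when this flow started (record)

# Absolute flow start = export wall clock minus how long before export it began
flow_start = unix_secs - (sysuptime_ms - first_switched_ms) / 1000
print(flow_start)  # 1608639496.0, i.e. 20 s before export
```

If the flow_start you compute this way disagrees with the capture timestamp by thousands of seconds, the exporter's unix_secs is wrong, not Splunk.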
It appeared to be the switches misreporting the time. I was planning to follow it up with the switch vendor, but we went in a different direction so I don't have any further updates, sorry!