
Memory Usage by streamfwd.exe

ashajambagi
Communicator

Hi All,

I have recently deployed the Splunk_TA_stream add-on on a universal forwarder to collect DNS data. The Stream app itself is configured on a heavy forwarder, and the universal forwarder forwards the data to the indexer cluster.

The streamfwd.exe service on the DNS server is consuming 1 GB of memory. Is it normal for streamfwd.exe to use memory in the GB range?

UF host details: Windows Server 2012 R2, 64-bit, 32 GB memory

Configuration on the universal forwarder:

limits.conf

[thruput]
maxKbps = 4096
inputs.conf

[streamfwd://streamfwd]
splunk_stream_app_location = https://<HF_IP>:8000/en-us/custom/splunk_app_stream/
disabled = 0
stream_forwarder_id =
sslVerifyServerCert = false

 

Solution

Richfez
SplunkTrust

How busy is your DNS server?

Also, you've capped the UF's throughput with maxKbps at 4096 (about 4 MB/s). If the DNS traffic exceeds that rate during busy times, the forwarder buffers the backlog, and that can use a lot of memory.
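
If you want to confirm the cap is actually being hit before changing anything, the forwarder's own metrics.log records its throughput. Here's a quick sketch of a search (assuming the UF forwards its _internal index to your indexers, which it does by default; <UF_hostname> is a placeholder for your DNS server's host name):

index=_internal host=<UF_hostname> source=*metrics.log* group=thruput name=thruput
| timechart span=5m max(instantaneous_kbps) AS peak_kbps, avg(average_kbps) AS avg_kbps

If peak_kbps sits pinned near your maxKbps value (4096 here), the throttle is the bottleneck.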

If I were you, I'd raise that limit way up, or remove it completely, and see what change that makes. Try maxKbps = 0, which is unlimited.
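
For example, a minimal limits.conf sketch for the UF (maxKbps lives under the [thruput] stanza; restart the forwarder afterward so it picks up the change):

[thruput]
# 0 = no thruput ceiling; the forwarder sends as fast as the pipeline allows
maxKbps = 0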

You can always set it back to something lower after testing shows whether this solves it. Frankly, I'd just leave it unlimited and build out indexer ingestion capacity if you have to. The only reason I can think of to leave it limited is to avoid saturating a small pipe, like a WAN connection that's underprovisioned for what's needed.

 

Happy Splunking,

Rich
