Getting Data In: Forwarders - Wed 5/22/24

Community Office Hours

Published on 03-25-2024 12:01 PM by Splunk Employee | Updated on 05-28-2024 04:15 PM

Register here. This thread is for the Community Office Hours session on Getting Data In (GDI): Forwarders on Wed, May 22, 2024 at 1pm PT / 4pm ET.

 

This is your opportunity to ask questions related to getting data into Splunk Platform using forwarders, including:

  • Universal Forwarder (UF) or heavy forwarder (HF) deployment/configuration
  • Troubleshooting forwarder connectivity issues, blocked queues, etc.
  • Improving forwarder performance
  • Anything else you’d like to learn!

 

Please submit your questions at registration or as comments below. You can also head to the #office-hours user Slack channel to ask questions (request access here).

 

Pre-submitted questions will be prioritized. After that, we will open the floor up to live Q&A with meeting participants.

 

Look forward to connecting!



adepp
Splunk Employee

Here are a few questions from the session (get the full Q&A deck and live recording in the #office-hours Slack channel):

 

Q1: We're seeing bottlenecks in the forwarder getting data to Splunk Cloud. Should output be pointed to multiple ports?

  • There could be multiple reasons for a bottleneck, but multiple ports is not the answer.
  • I would first check the forwarder's throughput limit: maxKBps in limits.conf (the [thruput] stanza) defaults to 256 KBps. It can be raised in increments or set to 0 for unlimited.
  • Assuming the bottleneck is throughput, splitting the data across multiple heavy forwarders may also help.
  • Troubleshooting guide can be found here
  • Further reading can be found here
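As a concrete sketch of the throughput setting mentioned above: the maxKBps limit lives in limits.conf on the forwarder (the value 1024 below is an illustrative assumption; the file location assumes a default install):

```ini
# $SPLUNK_HOME/etc/system/local/limits.conf on the forwarder
[thruput]
# Defaults to 256 KBps on a Universal Forwarder. Raise in increments,
# or set to 0 for unlimited throughput.
maxKBps = 1024
```

To confirm whether throughput is actually the problem, a search against the forwarder's internal logs such as `index=_internal source=*metrics.log* group=queue blocked=true` can show which queues are blocking.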

Q2: Are there any lightweight forwarders for IoT use cases, or is Edge Processor the one?

  • Depending on the IoT configuration and use case, we would recommend sending data to a syslog receiver in Edge Processor or to a traditional syslog-ng server. If you want to collect directly from sensors, controllers, etc. and send to Splunk, you can use the Splunk Edge Hub, a physical appliance purpose-built for these use cases.
  • Splunk Edge Hub Central link
  • Edge Processor Syslog configuration documentation

Q3: What are the strategies for monitoring Windows services and hang scenarios?

  • We typically recommend the Splunk Add-on for Microsoft Windows, as it ingests data according to the Common Information Model (CIM) and integrates well with advanced use cases and existing applications on Splunk Platform.
  • For Splunk Observability, you can use a set of different receivers on the OpenTelemetry Collector, such as the windowsperfcounters and windowseventlog receivers. You can also look at contrib receivers such as the iis receiver or the active_directory_ds receiver.
  • Windows add-on here
  • Windows best practices doc
  • OpenTelemetry receivers docs here and here
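A minimal sketch of the OpenTelemetry Collector approach described above. The receiver names come from the Collector contrib distribution; the channel, counter, and exporter choices are illustrative assumptions (in practice you would export to your Splunk HEC endpoint rather than the debug exporter):

```yaml
# Sketch: OpenTelemetry Collector config for basic Windows monitoring.
receivers:
  windowseventlog:
    channel: system            # watch the System channel for service events
  windowsperfcounters:
    collection_interval: 30s
    perfcounters:
      - object: "Memory"
        counters:
          - name: "Committed Bytes"

exporters:
  debug: {}                    # placeholder; swap in a Splunk exporter

service:
  pipelines:
    logs:
      receivers: [windowseventlog]
      exporters: [debug]
    metrics:
      receivers: [windowsperfcounters]
      exporters: [debug]
```

Note that windowseventlog emits logs and windowsperfcounters emits metrics, so they belong in separate pipelines.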

 

Other Questions (check the #office-hours Slack channel for responses):

  • Are universal forwarders the way forward, and when should an HF be used instead? For an old OS like Windows 7, has anyone tried newer forwarder versions?
  • How to filter events from logs
  • Would like to know more about input filtering at Universal Forwarder
  • A REST API dashboard against IIS logs (already forwarded): would like to identify obsolete endpoints and daily usage per endpoint
  • From a PCI v4.0 perspective, what are the required security and operational logs that have to be ingested and analyzed in Splunk?
  • Demo on upgrading forwarders from deployment servers
  • Cisco SNA/Stealthwatch integration
  • Input phase transformations at forwarder
  • Practical applications for SIEM & SOAR technologies
  • I am experiencing an issue while using rsyslog for logging. When I read the data on the port using tcpdump, the incoming data is clean, but when I read the data written from that port to the flat file, the file has brackets [ ] inserted throughout that were not there originally. This causes the CyberArk Add-on to not extract some of the fields correctly; the brackets end up inside some field values, so the logs are not normalized or extracted properly for use in monitoring. I found an article that states this is a known issue, but I need to build a test syslog host to verify the solution, and need guidance on how to build one. The UF is installed on the server where syslog is writing data. Any thoughts?