Getting Data In

With my RSYSLOG configuration, how do I get the UF to send the other log information to the relevant indexes in our single Splunk instance?

willadams
Contributor

I am trying to see where I have gone wrong with my RSYSLOG configuration and forwarding setup for SPLUNK. In our environment we are using SNARE on our endpoints, which sends the data to an RSYSLOG (CentOS) box that runs the SPLUNK UF. The SPLUNK UF then sends to a single instance of SPLUNK. So far the communications are working. The endpoints are sending to the RSYSLOG box, and the RSYSLOG box has been configured to pick up and log events based on a "content match". For example, data coming from "Windows-Security-Auditing" goes to a log file called "windowssecurityevents.log", and data from PowerShell goes to a log file called "powershell.log". This works fine.
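
The content-match rules on the RSYSLOG box are along these lines (the file paths and match strings below are simplified for illustration, not our exact config):

if $msg contains 'Windows-Security-Auditing' then /var/log/remote/windowssecurityevents.log
& stop
if $msg contains 'Microsoft-Windows-PowerShell' then /var/log/remote/powershell.log
& stop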

I have also configured the SPLUNK UF (inputs.conf) with the following:

[monitor://path/windowssecurityevents.log]
disabled = false
index = winsecevents
sourcetype = winsecevents_syslog

[monitor://path/powershell.log]
disabled = false
index = powershell
sourcetype = powershell_syslog

[monitor://path/OtherLog1.log]
disabled = false
index = OtherLog1
sourcetype = OtherLog1_syslog

[monitor://path/OtherLog2.log]
disabled = false
index = OtherLog2
sourcetype = OtherLog2_syslog

I also ran a command on the CentOS box to add the forward-server: "./splunk add forward-server 1.1.1.1:9997".
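
For completeness, that command writes the forwarding target into outputs.conf under the UF's etc/system/local directory; the result should look something like this (the exact output group name can vary by version):

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = 1.1.1.1:9997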

I am only able to see logs for 1 log file. The other 3 are not showing in the separate SPLUNK indexes that I have configured. The log files themselves are growing and I can see my data in RSYSLOG, so the problem isn't there. I am trying to see what I need to do with the UF to get it to send the other log information to the relevant index in our single SPLUNK instance.
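
(For reference, the UF's own CLI can be used to check that the monitors and the forward target are actually loaded; paths below assume a default /opt/splunkforwarder install.)

/opt/splunkforwarder/bin/splunk list monitor          # should list all four monitored log files
/opt/splunkforwarder/bin/splunk list forward-server   # should show 1.1.1.1:9997 as an active forward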

1 Solution

sudosplunk
Motivator

Try increasing the maxKBps value under the [thruput] stanza in limits.conf for a period of time and see whether it improves forwarding throughput (a sample stanza is shown after the spec excerpt below).

Here is the info per docs:

maxKBps = <integer>
* The maximum speed, in kilobytes per second, that incoming data is 
  processed through the thruput processor in the ingestion pipeline.
* To control the CPU load while indexing, use this setting to throttle
  the number of events this indexer processes to the rate (in
  kilobytes per second) that you specify.
* NOTE:
  * There is no guarantee that the thruput processor 
    will always process less than the number of kilobytes per
    second that you specify with this setting. The status of 
    earlier processing queues in the pipeline can cause
    temporary bursts of network activity that exceed what
    is configured in the setting. 
  * The setting does not limit the amount of data that is 
    written to the network from the tcpoutput processor, such 
    as what happens when a universal forwarder sends data to 
    an indexer.  
  * The thruput processor applies the 'maxKBps' setting for each
    ingestion pipeline. If you configure multiple ingestion
    pipelines, the processor multiplies the 'maxKBps' value
    by the number of ingestion pipelines that you have
    configured.
  * For more information about multiple ingestion pipelines, see 
    the 'parallelIngestionPipelines' setting in the 
    server.conf.spec file.
* Default (Splunk Enterprise): 0 (unlimited)
* Default (Splunk Universal Forwarder): 256
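
A minimal limits.conf on the UF (in $SPLUNK_HOME/etc/system/local/ or an app of your choice) would look like the below; 0 removes the limit entirely, or you can try a higher finite value first:

[thruput]
# default on a universal forwarder is 256 KBps; 0 = unlimited
maxKBps = 0

Restart the forwarder after changing this so the new limit takes effect.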




willadams
Contributor

nittala_surya answered this question.

The problem was related to maxKBps and the UF throttling my connection. Once the value was set to 0, all logs started to flow as required.


willadams
Contributor

The problem was related to maxKBps. I also discussed this with a SPLUNK contact and found that splunkd.log was showing that the maximum throughput had been reached and the UF was throttling the connection. I have now set the maxKBps value to 0 and all the logs are flowing correctly.
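
For anyone else chasing this, the throttling is visible in the UF's splunkd.log; a quick way to check (path assumes a default /opt/splunkforwarder install, and the exact message wording varies by version) is:

grep -i thruput /opt/splunkforwarder/var/log/splunk/splunkd.log

The relevant lines typically come from the ThruputProcessor component and say that the current data throughput has reached maxKBps.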


willadams
Contributor

I realised that there was obviously some delay with the logs getting in. I checked the events again and found that they are now populating. Maybe this was related to the size of the files...?


willadams
Contributor

I am seeing the log files grow, but it seems that the SPLUNK UF takes a while to send log information. Is there any way to potentially speed this up?
