
Cisco Secure eStreamer Client Add-On for Splunk | Logs Filling Up

jmr
Explorer

I am noticing that the eStreamer Client Add-On is generating a lot of log files and filling up the disk on my Splunk Enterprise server.

Is there any way to mitigate this?

It looks like the app writes about 10,000 lines per file (see the wc output below). Is there any way to set an overwrite or scavenge setting so it doesn't just keep filling up the disk indefinitely?

 

root@thall-splunk02:/opt/splunk/etc/apps/TA-eStreamer/bin/encore/data/splunk# du -sh /opt/splunk/etc/apps/TA-eStreamer/bin/encore/data/splunk/
87G /opt/splunk/etc/apps/TA-eStreamer/bin/encore/data/splunk/
root@thall-splunk02:/opt/splunk/etc/apps/TA-eStreamer/bin/encore/data/splunk# ll
total 90725612
drwx--x--- 2 root root 151552 Sep 18 02:06 ./
drwx--x--- 3 root root 4096 Sep 13 12:18 ../
-rw------- 1 root root 27279093 Sep 13 12:18 encore.1694621917.log
-rw------- 1 root root 28232829 Sep 13 12:18 encore.1694621924.log
-rw------- 1 root root 28304921 Sep 13 12:18 encore.1694621930.log
-rw------- 1 root root 28368804 Sep 13 12:19 en ...

 

 

wc -l encore.1694630328.log
10000 encore.1694630328.log


jmr
Explorer

Official answer:

 

We have a default clean-up script, but it is by no means a full solution. It ages off files older than a set threshold (defined in the splencore.sh script), but if you have a high volume, that threshold may not be acceptable. There are a few options we recommend to our clients:

 

(#1)

In inputs.conf:

 

Change the monitor stanza to a batch stanza. This deletes each file once it has been ingested into Splunk, which is useful if Splunk is the only system of record.
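
A minimal sketch of what that could look like (the path is the data directory from the du output above; carry over any sourcetype or index settings from your existing monitor stanza, which are not shown here):

# Before: a monitor stanza leaves the files on disk after indexing
[monitor:///opt/splunk/etc/apps/TA-eStreamer/bin/encore/data/splunk]

# After: a batch stanza with move_policy = sinkhole deletes each file once it is indexed
[batch:///opt/splunk/etc/apps/TA-eStreamer/bin/encore/data/splunk]
move_policy = sinkhole

Restart Splunk (or reload its inputs) after editing inputs.conf.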

 

 

(#2)

 

Symlink to a NAS drive or larger filesystem:

 

If you want to retain the eStreamer log files, you could replace the output folder with a symlink to a location that has adequate capacity, for example a NAS mount or a larger filesystem (something on the order of /var/log in size).
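
A minimal sketch, assuming the NAS or larger filesystem is mounted at /mnt/nas (that mount point is an assumption) and using the data directory from the du output above. Stop the eStreamer client first so no files are being written during the move:

# move the existing data directory to the larger filesystem
mv /opt/splunk/etc/apps/TA-eStreamer/bin/encore/data/splunk /mnt/nas/estreamer-data
# create a symlink in its place so the TA keeps writing to the same path
ln -s /mnt/nas/estreamer-data /opt/splunk/etc/apps/TA-eStreamer/bin/encore/data/splunk

The user running the eStreamer client (and Splunk) needs read/write access to the new location; check that the symlink resolves with ls -l before restarting the client.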

 

(#3)

 

Modify the age-off task. Its default is 12 hours, but that can be changed in the /opt/splunk/etc/apps/TA-eStreamer/bin/splencore.sh file. Note that modifying this file can conflict with future updates of the app, so keep in mind that after an upgrade overwrites it you will need to go back and modify the file again.

 

clean() {
    # Delete data older than 12 hours -> 720mins
    # (completion below is a sketch; check your installed splencore.sh for the exact command)
    find /opt/splunk/etc/apps/TA-eStreamer/bin/encore/data/splunk -type f -mmin +720 -delete
}
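
For example, to keep only 4 hours of data instead of 12, change 720 to 240 (minutes) in the deletion command above; remember to re-apply that edit after any TA upgrade that overwrites splencore.sh.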



