Splunk Enterprise

Bug report - Cisco add-on slows down the search app when it is activated

franckedery
New Member

Hello,

Cisco add-on v. 2.7.3 significantly slows down our Splunk Enterprise production platform when it is activated. The search "index=xxxxx sourcetype=cisco:ios" goes from a few ms on our development platform to more than 1 hour on our production platform.

Do you know of any configuration in the add-on that could affect the performance of operations depending on the platform configuration?

 

Thanks a lot for your suggestions!


deepakc
Builder

 

There are many factors that could cause performance issues in your prod environment that weren't present in dev; production normally has more data and many other variables that could cause issues.

Splunk is a workhorse: it needs sufficient CPU, memory, and disk resources, and other factors to be in place.

Things to consider:

  1. Has the environment been sized correctly for production?
  2. Is the data on fast disks (SSD, etc.)?
  3. Are there lots of users running the same search, over All Time, at the same time?
  4. Do you have indexer clustering, or is it a Splunk all-in-one deployment?

Add-ons (TAs) normally provide parsing and other knowledge objects, and could potentially impact the environment through regex processing, for example. Splunk apps, on the other hand, contain searches and dashboards that could cause impact through long-running searches. But normally it comes down to Splunk sizing or something specific to the environment.

I don't recall a TA ever causing performance issues in a PROD environment, but I suppose it could happen.

I suggest:

  1. Use the Monitoring Console (MC) for the production environment; it is a good place to start checking performance issues. Check CPU/memory on the search heads and indexers first.
  2. Check the searches run and search memory usage, using the MC.
  3. If you remove the TA, does performance improve? If you re-install it, does it get bad again?
  4. If all that fails, consider logging a support call.
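As a rough starting point for step 2, a search over the audit index can surface the heaviest searches. This is a sketch using the standard _audit fields; adjust the time range to your needs:

```
index=_audit action=search info=completed
| stats count avg(total_run_time) AS avg_sec max(total_run_time) AS max_sec BY user search
| sort - max_sec
| head 20
```

Run it on the search head over the period when the slowdown occurs and compare which searches dominate.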

 

Monitoring Console

https://docs.splunk.com/Documentation/Splunk/9.2.1/DMC/DMCoverview

Splunk Sizing guide

https://lantern.splunk.com/Splunk_Platform/Product_Tips/Administration/Sizing_your_Splunk_architectu...


franckedery
New Member

Hello,

Thanks a lot for your answer. After a few tests, the same bug happens when we import one day of logs (500 MB) into the debug environment. So the problem seems to come from the Cisco logs themselves.

We will try activating/deactivating transformations in the props.conf file (I will start with the lookups), and I will keep the community up to date! Do not hesitate to suggest other actions we should take!
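For reference, disabling a search-time lookup usually means commenting it out in props.conf. The stanza below is illustrative only; the real lookup and transform names come from the TA's default/props.conf:

```
# props.conf (search head) - names below are illustrative examples,
# the real ones are in the Cisco TA's default/props.conf
[cisco:ios]
# comment out a search-time lookup to disable it:
# LOOKUP-vendor = cisco_vendor_lookup mac OUTPUT vendor
# likewise for a heavy field-extraction transform:
# REPORT-cisco_fields = cisco_ios_extractions
```

Disabling them one at a time should help isolate which knowledge object is expensive.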

Thanks for your help!

 

 


deepakc
Builder

Network data can be notorious for sending large volumes of data - where possible, filter at the source.

 

It's also worth thinking about how you're sending the network data to Splunk.

 

The better syslog options are:

  1. Splunk's free SC4S (a containerized syslog server under the hood)
  2. Run a syslog server (rsyslog or syslog-ng) and send the data there, then let a UF pick up the files and forward them to Splunk.

 

Many people set up TCP/UDP ports directly on a HF or on the Splunk indexers; this can have various implications for large environments (not saying you can't do it), and it's not ideal for production, but for testing or small environments it's OK.
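For the syslog-server-plus-UF route, a minimal UF inputs.conf might look like this. The path, index, and sourcetype are illustrative; match them to your environment:

```
# inputs.conf on the Universal Forwarder
# rsyslog/syslog-ng writes the Cisco logs to disk; the UF tails them
[monitor:///var/log/network/cisco/*.log]
index = xxxxx
sourcetype = cisco:ios
disabled = false
```

This decouples syslog reception from Splunk restarts, so no network events are lost while Splunk is down.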
