Getting Data In

Warnings and violations: how do I reduce my indexed volume?

mataharry
Communicator

Hi
I have a license pool for X GB per day, and I blow through it almost every single day.
How can I selectively reduce my indexing volume?

1 Solution

yannK
Splunk Employee

Hi Mata

The options are simple:

  • reduce the indexed volume,
  • or get a license volume upgrade (contact Splunk sales).

For the first option, here are the steps:

1 - Analyze your data to identify where the volume is coming from.
In 4.2+ you can use these searches on the license master,
see http://wiki.splunk.com/Community:TroubleshootingIndexedDataVolume
If you want more detail, you can split by source "s", host "h", sourcetype "st", or indexer "i".

total per pool
index=_internal source=*license_usage.log type=Usage | eval GB=b/1024/1024/1024 | timechart span=1d sum(GB) by pool

detail per sourcetype
index=_internal source=*license_usage.log type=Usage | eval GB=b/1024/1024/1024 | timechart span=1d sum(GB) by st useother=false

detail per source
index=_internal source=*license_usage.log type=Usage | eval GB=b/1024/1024/1024 | timechart span=1d sum(GB) by s useother=false

detail per host
index=_internal source=*license_usage.log type=Usage | eval GB=b/1024/1024/1024 | timechart span=1d sum(GB) by h useother=false
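To rank the heaviest offenders across host and sourcetype together (timechart only splits on a single field), a stats variant of the searches above can help — a sketch using the same license_usage.log fields:

top volume by host and sourcetype
index=_internal source=*license_usage.log type=Usage | eval GB=b/1024/1024/1024 | stats sum(GB) as GB by h, st | sort - GB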

2 - If some forwarders are not necessary, turn the Splunk forwarder off on those boxes.
(Why did you deploy a forwarder on every single box in the first place?)

3 - If some useless files are being indexed, be more selective.
Disable the inputs, or use whitelists/blacklists to limit the scope.
Example to drop the core files, and to index only .log files:
`[monitor:///var/log]
blacklist=\.core$
[monitor:///mypath/*.log]
`

4 - If some servers are sending too much data (syslog, for example),
disable the routing to Splunk, or select which components to send.
Example in syslog.conf (send only critical and error messages, plus every event from my application, assuming it logs to the local0 facility):

*.crit @splunk.mydomain.com
*.err @splunk.mydomain.com
local0.* @splunk.mydomain.com

5 - If some log files contain too much data, reduce the verbosity level of your applications (for example, avoid DEBUG mode).

6 - Search for duplicate events in the logs: check whether they exist in the original logs, or whether the same log file is being indexed several times (log rotation can cause that).
Here is a search to find duplicates in Splunk:

* | eval raw=_raw | convert ctime(_indextime) as indextime
| stats count first(indextime) as first last(indextime) as last by raw | where count > 1 | table count first last raw
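To check whether the duplicates come from the same file being picked up under several paths (for example after log rotation), a variant of the search above also counts distinct sources per event — a sketch:

* | stats count dc(source) as sources values(source) as source_list by _raw | where count > 1 AND sources > 1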

Then drill down to the source to investigate.

7 - If you cannot disable an input but don't need all the events, you can set up nullQueue filtering of the events.
This has to be set up on the indexers (or heavy forwarders).
(With Windows event logs, we usually filter on the EventCode.)

See examples: http://docs.splunk.com/Documentation/Splunk/4.3/Deploy/Routeandfilterdatad#Discard_specific_events_a...

  • Discard specific events and keep the rest
  • or Keep specific events and discard the rest
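A minimal sketch of the nullQueue filtering described in step 7, assuming Windows event logs with the sourcetype WinEventLog:Security and a hypothetical EventCode 4662 to discard (adjust the sourcetype, stanza name, and code to your own data):

`# props.conf (on the indexer or heavy forwarder)
[WinEventLog:Security]
TRANSFORMS-null = drop_eventcode

# transforms.conf
[drop_eventcode]
REGEX = (?m)^EventCode=4662
DEST_KEY = queue
FORMAT = nullQueue
`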


yannK
Splunk Employee

Details are on this wiki page : http://wiki.splunk.com/Community:TroubleshootingIndexedDataVolume

Remark:
License_usage.log is available on the Splunk license master instance only. The license master logs indexed-volume events every minute, based on the information the slaves send to it. A slave maintains a table of how much you've indexed, in chunks of time. Typically that chunk is 1 minute, but it may grow if the slave cannot contact the master; Splunk only resets the chunk when the table is sent to the master. The table is keyed on (source, sourcetype, host) tuples. If it grows to exceed 1000 entries, Splunk squashes the host and source keys, so if you have more than 1000 distinct tuples you will find no value for the h(ost) and s(ource) fields. Splunk never squashes st (sourcetype) in the log.
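To check whether this squashing affects your data, you can look for Usage events where the host and source fields come back empty — a sketch, assuming squashed entries report empty h and s as described above:

index=_internal source=*license_usage.log type=Usage (h="" OR s="") | stats sum(eval(b/1024/1024/1024)) as GB by st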


rjyetter
Path Finder

Enable the Splunk Deployment Monitor app and see which host/source is sending the most data. Decide on the value of that data, and then disable it.
