Monitoring Splunk

Logs quantity

BRFZ
Path Finder

Hello,

Could you please provide guidance on how to retrieve the daily quantity of logs per host? Specifically, I am looking for a method or query to get the amount of logs generated each day, broken down by host.

Best regards,

PickleRick
SplunkTrust

The license reports mentioned by @gcusello are quite useful, but they might contain some summarized data. You can also search your raw data and calculate it manually:

index=whatever
| stats sum(eval(len(_raw))) BY host

Some caveats, though:

1. It is slow - it must read all events and check their lengths. You can work around this problem by creating an indexed field containing the event's length.

2. It shows the size of searchable data. It won't show data you don't have access to (e.g. if you're searching multiple indexes and only have rights to some of them), won't show the size of "removed" data if someone used | delete, will probably be limited by your role's search filters, and so on.

3. Be cautious about _time vs. _indextime - the event timestamp and the time the event was indexed can fall on different days.
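
For the daily split per host that was asked about, a minimal (untested) sketch of the same idea, just bucketed by day, could look like this:

index=whatever
| bin _time span=1d
| stats sum(eval(len(_raw))) AS bytes BY _time host

If you care about the day the data was ingested rather than the event timestamp (caveat 3), you could eval _time=_indextime before the bin.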

gcusello
SplunkTrust

Hi @BRFZ ,

you can use the License Usage Report [Settings > Licensing > Usage Report > Previous 30 days > Split by Host] and customize it, or the Monitoring Console app, which gives the same results.

The only limit is the retention time of your _internal data.

Ciao.

Giuseppe

BRFZ
Path Finder

Hello @gcusello,

Is this not achievable via a search, please?

Best regards,

gcusello
SplunkTrust

Hi @BRFZ ,

every dashboard in Splunk is a search; you can open the panel in Search (using the button with the same name) and see how it's written so you can modify it. In a few words, this:

index=_internal [`set_local_host`] source=*license_usage.log* type="Usage" 
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h) 
| eval s=if(len(s)=0 OR isnull(s),"(SQUASHED)",s) 
| eval idx=if(len(idx)=0 OR isnull(idx),"(UNKNOWN)",idx) 
| bin _time span=1d 
| stats sum(b) as b by _time, pool, s, st, h, idx   
| timechart span=1d sum(b) AS volumeB by h fixedrange=false  
| join type=outer _time [search 
     index=_internal [`set_local_host`] source=*license_usage.log* type="RolloverSummary" earliest=-30d@d 
     | eval _time=_time - 43200 
     | bin _time span=1d 
     | dedup _time stack 
     | stats sum(stacksz) AS "stack size" by _time] 
| fields - _timediff  
| foreach * [eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 3)]
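
If all you need is the daily indexed volume split by host, a trimmed-down (untested) sketch of the search above, keeping only the Usage events (b is the byte count, h is the host), might be enough:

index=_internal [`set_local_host`] source=*license_usage.log* type="Usage"
| eval h=if(len(h)=0 OR isnull(h),"(SQUASHED)",h)
| timechart span=1d sum(b) AS volumeB by h fixedrange=false

The same caveat applies: it only goes back as far as your _internal retention.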

Ciao.

Giuseppe

dural_yyz
Builder
| tstats count where index=* index!=_* by host

This will only give you a count of events; it provides no insight into storage size. You can add sourcetype and/or source after the host field if you need more detailed information.

It all depends upon what your specific goals are. 
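
If a per-day breakdown of that count is wanted, a small (untested) variation is to add _time with a daily span to the by clause:

| tstats count where index=* index!=_* by _time span=1d, host

This stays fast because tstats only reads index-time metadata, but it still counts events rather than bytes.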
