Getting Data In

I need a query that shows how much data is being logged to Splunk by each host every hour.

Explorer

The reason I need this query: last month, and again a few days ago, we exceeded our license usage because one server alone generated over 1.3 TB of data in a single day due to an error. If I had received some kind of alert or notification that this server was generating that much data, we could have stopped it or figured out the problem. A query that generates an alert when a host produces an unusual amount of data within an hour or two would be very helpful.

0 Karma
1 Solution

Revered Legend

Give this a try (to be run from the License Master. You can also use a Search Head if you forward internal data from the License Master to your indexers):

index=_internal sourcetype=splunkd component=LicenseUsage type=Usage
| bucket span=1h _time
| stats sum(b) as usage by _time h
| rename h as host
| eval usageGB=round(usage/1024/1024/1024,3)
| table _time host usageGB


0 Karma

Legend

@loureni1 check out the Meta Woot! app from Splunkbase which can do this and much more 🙂

____________________________________________
| makeresults | eval message= "Happy Splunking!!!"
0 Karma

Contributor

This should get you started.

index=_internal source=*license_usage.log type=Usage
| stats sum(b) as bytes by h
| eval MB = round(bytes/1024/1024,1)
| fields h MB
| rename h as host
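
To get the per-hour breakdown the question asks for, the same search can be bucketed by hour. A sketch along the same lines (assuming the usual license_usage.log fields, where b is bytes and h is host):

index=_internal source=*license_usage.log type=Usage
| bucket span=1h _time
| stats sum(b) as bytes by _time h
| eval MB=round(bytes/1024/1024,1)
| rename h as host
| table _time host MB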

0 Karma

Explorer

Thanks Surat

0 Karma


Path Finder

You can easily enhance the above answer to make it an alert. I'd probably just add a where clause with some threshold. In the alert configuration, fire if the query returns more than 0 rows, which I believe is the default.

index=_internal sourcetype=splunkd component=LicenseUsage type=Usage
| bucket span=1h _time
| stats sum(b) as usage by _time h
| rename h as host
| eval usageGB=round(usage/1024/1024/1024,3)
| table _time host usageGB
| where usageGB >= 20

You can set the alert to run hourly over the last hour. It will only return a row when a host uses 20 GB or more in an hour, and any returned row will trigger the alert.

If you really do need the alert to only detect spikes and not a fixed threshold, that is very doable but you probably want to create a separate question for that. I think your original question was answered.
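
For the spike case, one possible sketch (untested, assuming the same LicenseUsage fields as above) compares each host's hourly usage against its own trailing average with streamstats:

index=_internal sourcetype=splunkd component=LicenseUsage type=Usage
| bucket span=1h _time
| stats sum(b) as usage by _time h
| rename h as host
| streamstats window=3 current=f avg(usage) as avgUsage by host
| eval usageGB=round(usage/1024/1024/1024,3)
| where avgUsage>0 AND usage > 2*avgUsage

Here "2x the trailing three-hour average" is an arbitrary choice; adjust the window and multiplier to match how sharp a spike you want to catch.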

Explorer

Thank you ... this is very helpful. I was actually looking for an alert.

0 Karma

Explorer

Thank you for this query. Yes, it gives the hourly data usage for each host. I am still working out a query that shows how much data each server generates every hour and raises an alert when usage exceeds a certain percentage.
For example: if host ABCD sends Splunk 10 GB of data in the first hour, 15 GB in the second hour, and then 50 GB in the third hour, that is a big spike, and when that kind of spike occurs I want to be alerted.

0 Karma