Knowledge Management

Summary index for large data volumes and many group-by fields

tjago11
Communicator

I'm hoping to get a single summary index query that I can then use to pull data in different ways. I would prefer to roll the data up daily, but there are about 150 million events in a day. Normally that wouldn't be an issue, but I also want to group the data by lots of different fields, like this:

index=cif
| fields ApplicationName, DataCenter, Environment, ServerType, host, ErrorCode, MessageText, _time
| eval dateOnly = strftime(_time, "%x") 
| fields dateOnly, ApplicationName, DataCenter, Environment, ServerType, host, ErrorCode, MessageText
| fillnull value=""
| stats count as messageCount by dateOnly, ApplicationName, DataCenter, Environment, ServerType, host, ErrorCode, MessageText
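
For reference, the plan is to schedule the above as a summary-indexing report. If I instead wrote yesterday's rollup out explicitly, I believe it would look something like the sketch below — the -1d@d window and the trailing collect are just my sketch, though the index and source names match the retrieval search further down:

index=cif earliest=-1d@d latest=@d
| eval dateOnly = strftime(_time, "%x")
| fillnull value=""
| stats count as messageCount by dateOnly, ApplicationName, DataCenter, Environment, ServerType, host, ErrorCode, MessageText
| collect index=summary source="mySource"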

The goal is to count the number of times a particular message occurs. On the retrieval side, once the summary is populated, the user would pull the data back like this:

index=summary source=mySource ApplicationName=foo DataCenter=foo Environment=bar ServerType=bar host=*
| stats sum(messageCount) as messageCount by dateOnly

On retrieval the user will know the values of the various filter fields, so the query returns a much smaller set of data. If I group by the filter fields when building the summary index, I can use them to filter later. I like that this approach gets me a single summary index job, but the query takes about 2.5 hours to complete.
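
The nice part is that the same summary could be sliced other ways too, e.g. daily counts per error code — the filter values here are just examples:

index=summary source=mySource ApplicationName=foo
| stats sum(messageCount) as messageCount by dateOnly, ErrorCode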

Am I better off running more summary jobs and filtering up front? That would mean more summary index sources and more jobs, which is annoying but maybe necessary. Thanks.

1 Solution

Vijeta
Influencer

What is the frequency of your summary report? If it's daily, you could schedule it twice a day with a 12-hour window, or maybe every hour depending on the data volume.
Also, based on ApplicationName you can create separate summary reports that collect data into the same summary index under different source names (the source name will be the name of your scheduled report). Later, when you search, you can use the index and source name in your query to target a particular application.
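
For example, a per-application populating search might look like the sketch below — the application name is a placeholder; save it as a scheduled report with summary indexing enabled, and the report name becomes the source:

index=cif ApplicationName="foo"
| eval dateOnly = strftime(_time, "%x")
| fillnull value=""
| stats count as messageCount by dateOnly, DataCenter, Environment, ServerType, host, ErrorCode, MessageText

Then at search time you filter by that source (the source name here stands in for your report name):

index=summary source="<name of foo's scheduled report>"
| stats sum(messageCount) as messageCount by dateOnly

Note that ApplicationName drops out of the group-by, which shrinks the cardinality each job has to handle.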

Thanks
Vijeta


tjago11
Communicator

I think your suggestion to split the data up by application and run separate jobs with separate sources is a good option. Thanks.
