Knowledge Management

Best practice for working with a large dataset

mblauw
Path Finder

I've got a very large dataset with about 50M events each month. I currently have 3 months indexed, so approximately 150M events.

Now, when I try to build an accelerated report, it still contains nearly 2M events. This is the minimum possible, as I still need to use quite a lot of fields that have unique combinations in the data.

What would be the best practice for building a search on this data? Searches have to cover all-time data.

We have also built this in ES, where it only takes about 1 minute to show a result over the full 3 months' worth of data.


sloshburch
Splunk Employee

You mentioned an accelerated report. Did you also try creating a Data Model or Data Set and then accelerating that?


mblauw
Path Finder

No, I have not tried that yet. Do those have much better performance over large datasets?


sloshburch
Splunk Employee

Absolutely! Search acceleration is great for very specific searches, while datasets are more malleable and allow for a wider variety of analysis against the data. You can even accelerate them as well!

http://docs.splunk.com/Documentation/Splunk/latest/Knowledge/Aboutdatasets


mblauw
Path Finder

Unfortunately, this also did not have the desired result in terms of search speed. I'm now trying to build a summary index to see how well that goes.

Thank you for your input!


sloshburch
Splunk Employee

In terms of search speed, can you confirm that you accelerated the data model after creating it? Between that and the 'tstats' command, you should see an amazing difference. You would see no difference if no such acceleration was turned on.
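For illustration, here is a minimal sketch of the kind of search that benefits from an accelerated data model; the model name your_model, root dataset your_root, and field duration are placeholders, not names from this thread:

| tstats summariesonly=true avg(your_root.duration) AS avg_duration FROM datamodel=your_model BY _time span=1d

With acceleration enabled, tstats reads from the pre-built summaries rather than the raw events, which is where the speed-up comes from.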


mblauw
Path Finder

Thank you! I found out I only had an accelerated dataset, not an accelerated datamodel. It is building atm. Will see how that goes.


mblauw
Path Finder

I've built the data model. It shows the following, and the tstats command is not working...

ACCELERATION
Status: 100.00% Completed
Access Count: 4
Last Access: 5/12/17 12:11:53.000 PM
Size on Disk: 3059.04 MB
Summary Range: 0 second(s)
Buckets: 2
Updated: 5/12/17 12:01:46.000 PM


sloshburch
Splunk Employee

When you pivot on the dataset, does it work faster than before? Also, show us what part of tstats is not working - it's likely just syntax related.


mblauw
Path Finder

Pivot is incredibly fast up until 113M events, but the last 50M events are very slow. The time range for acceleration is set to All Time.


sloshburch
Splunk Employee

I wonder if the last 50M events were just not yet accelerated.


mblauw
Path Finder

But in Settings > Data Models it says Completed: 100%. That's strange then, right?


sloshburch
Splunk Employee

"Pivot is incredibly fast up until 113M events, but the last 50M events are very slow." <- is that still the case? Does the Job Inspector tell you anything? I'd try similar searches with tstats, setting the summariesonly flag to true and then to false, to see if you can pinpoint more of that 50M portion. Also, did you mean the most recent or the earliest events (relative to _time) when you mentioned "last"? And does that happen consistently, or just that one time? If just that one time, it could have been load on the indexer.

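As a rough sketch of that comparison (using the data model name that comes up later in this thread; run each as a separate search), counting events per day with and without the summaries should show where the acceleration stops:

| tstats summariesonly=true count FROM datamodel=ndw_acc_datamodel BY _time span=1d

| tstats summariesonly=false count FROM datamodel=ndw_acc_datamodel BY _time span=1d

Days where the two counts diverge are the ranges that have not been summarized yet.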

mblauw
Path Finder

I see that I received this error message. Does anybody know what this means?

Audit:[timestamp=05-12-2017
16:40:42.547, user=splunk-system-user,
action=search, info=failed,
search_id='SummaryDirector_1494600038.3739', total_run_time=0.15, event_count=0,
result_count=0, available_count=0,
scan_count=0, drop_count=0,
exec_time=1494600038, api_et=N/A,
api_lt=N/A, search_et=N/A,
search_lt=N/A, is_realtime=0,
savedsearch_name="",
search_startup_time="0",
searched_buckets=0,
eliminated_buckets=0,
considered_events=0, total_slices=0,
decompressed_slices=0][n/a]


sloshburch
Splunk Employee

I think the SummaryDirector items are the auto-generated acceleration searches. Other than 'info=failed', I'm not seeing an error message. Where did this arise? If it's the acceleration, then I think Splunk will self-correct.


mblauw
Path Finder

Will try that tomorrow. I'm now rebuilding the datamodel acceleration, hoping that will fix some things.

This is the query I used:

| tstats avg(t_0_10s) FROM datamodel=ndw_acc_datamodel

I've got a datamodel called ndw_acc_datamodel, in which lives a dataset (root event) called ndw_acc_datamodel_set1.
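(A minimal sketch of what may be going wrong here: when a field lives on a root event dataset, tstats generally expects the dataset-qualified field name, so the search above might instead be written as below. This is an assumption based on the names given, not a confirmed fix from this thread.)

| tstats avg(ndw_acc_datamodel_set1.t_0_10s) AS avg_t_0_10s FROM datamodel=ndw_acc_datamodel.ndw_acc_datamodel_set1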


sloshburch
Splunk Employee

What about this tstats command is not working - is it showing an error message, or producing no results?

Try | tstats values FROM datamodel=ndw_acc_datamodel to validate t_0_10s is the right field name.

According to the docs, you can use the summariesonly flag to restrict the search to only the data that has been accelerated thus far (if you desire). Also, prestats=true returns the data as if you had used the summary indexing commands - ready for further stats commands afterward, with more of the original summarization detail.

http://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Tstats
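A minimal sketch putting both flags together, reusing the dataset-qualified field name assumed earlier in this thread:

| tstats prestats=true summariesonly=true avg(ndw_acc_datamodel_set1.t_0_10s) FROM datamodel=ndw_acc_datamodel.ndw_acc_datamodel_set1 BY _time span=1h
| timechart span=1h avg(ndw_acc_datamodel_set1.t_0_10s)

Here prestats=true hands the partial results to the following timechart, much like a summary-indexed search would.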


mblauw
Path Finder

Your query returns no results...


woodcock
Esteemed Legend

If it is possible, roll up the distinct combinations into whatever breakout timespans you need, perhaps one hourly, one daily, and one monthly, and put them into a summary index. Then you can pull from that instead of the raw data.
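A minimal sketch of such an hourly rollup, run as a scheduled search over the previous hour; the index, sourcetype, field, and summary index names are placeholders:

index=your_index sourcetype=your_sourcetype earliest=-1h@h latest=@h
| stats avg(t_0_10s) AS avg_t_0_10s count AS event_count BY your_unique_key
| collect index=your_summary_index

All-time reports can then search index=your_summary_index, which stays small because it only holds the rollups.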
