Knowledge Management

Best practice for working with large datasets

Path Finder

I've got a very large dataset which gets about 50M events each month. I currently have 3 months indexed, so approximately 150M events.

Now, even when I build an accelerated report, it still contains nearly 2M events. That is the minimum possible, since I need quite a lot of fields with unique combinations in the data.

What would be best practice for building a search on this data? Searches have to run over all-time data.

We have also built this in ES. There, it only takes about 1 minute to show a result over the full 3 months' worth of data.

Ultra Champion

You mentioned an accelerated report. Did you also try creating a Data Model or Data Set and then accelerating that?

Path Finder

No, I have not tried that yet. Do those have much better performance over large datasets?

Ultra Champion

Absolutely! Search acceleration is great for very specific searches, while datasets are more malleable: they allow a wider variety of analysis against the data, and you can even accelerate them as well!

http://docs.splunk.com/Documentation/Splunk/latest/Knowledge/Aboutdatasets
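To illustrate the payoff: once a data model is accelerated, a tstats search against it can aggregate directly from the summaries instead of the raw events. A minimal sketch, where the data model name (my_datamodel), dataset name (web), and field name (response_time) are hypothetical placeholders, not from this thread:

```
| tstats count avg(web.response_time) AS avg_resp
    FROM datamodel=my_datamodel
    WHERE nodename=web
    BY _time span=1h
```

Note that fields in a data model are referenced as dataset.field (web.response_time here), which matters again later in this thread.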

Path Finder

Unfortunately this also did not have the desired result in terms of search speed. Now trying to build a summary index to see how well that goes.

Thank you for your input!

Ultra Champion

In terms of search speed, can you confirm that you accelerated the data model after creating it? Between that and the 'tstats' command, you should see an amazing difference. You would see no difference if no such acceleration was turned on.

Path Finder

Thank you! I found out I only had an accelerated dataset, not an accelerated data model. It is building atm. Will see how that goes.

Path Finder

I've built the data model. It shows the following, and the tstats command is not working...

ACCELERATION
Status: 100.00% Completed
Access Count: 4
Last Access: 5/12/17 12:11:53.000 PM
Size on Disk: 3059.04 MB
Summary Range: 0 second(s)
Buckets: 2
Updated: 5/12/17 12:01:46.000 PM

Ultra Champion

When you pivot on the dataset, does it work faster than before? Also, show us how tstats is not working; it's likely just syntax related.

Path Finder

Pivot is incredibly fast up until 113M events; the last 50M events are very slow. The time range for acceleration is set to All Time.

Ultra Champion

I wonder if the last 50M events were just not yet accelerated.

Path Finder

But in Settings > Data Models it says Completed: 100%. That's strange then, right?

Ultra Champion

"Pivot is incredibly fast up until 113M events, the last 50M events are very slow." <- is that still the case? Does the Job Inspector tell you anything? I'd try similar searches with tstats, with the summariesonly flag set to true and then to false, to see if you can pinpoint more about that 50M part. Also, did you mean the most recent or the earliest events (relative to _time) when you mentioned "last"? And does that happen consistently, or just that one time? If just that one time, it could have been load on the indexer.
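A sketch of that comparison, using the data model name that appears later in this thread (adjust for your own): run each of these two searches separately and compare the daily counts. Days where the summariesonly=true counts fall short of the summariesonly=false counts are the days that are not yet summarized.

```
| tstats summariesonly=true count FROM datamodel=ndw_acc_datamodel BY _time span=1d
```

```
| tstats summariesonly=false count FROM datamodel=ndw_acc_datamodel BY _time span=1d
```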

Path Finder

I see that I received this error message. Does anybody know what it means?

Audit:[timestamp=05-12-2017
16:40:42.547, user=splunk-system-user,
action=search, info=failed,
search_id='SummaryDirector_1494600038.3739', total_run_time=0.15, event_count=0,
result_count=0, available_count=0,
scan_count=0, drop_count=0,
exec_time=1494600038, api_et=N/A,
api_lt=N/A, search_et=N/A,
search_lt=N/A, is_realtime=0,
savedsearch_name="",
search_startup_time="0",
searched_buckets=0,
eliminated_buckets=0,
considered_events=0, total_slices=0,
decompressed_slices=0][n/a]

Ultra Champion

I think the SummaryDirector items are the autogenerated accelerations. Other than 'info=failed', I'm not seeing an error message. Where did this arise? If it's the acceleration, then I think Splunk will self-correct.

Path Finder

Will try that tomorrow. I'm now rebuilding the data model acceleration, hoping that will fix some things.

This is the query I used:

| tstats avg(t_0_10s) FROM datamodel=ndw_acc_datamodel

I've got a data model called ndw_acc_datamodel, in which lives a dataset (root event) called ndw_acc_datamodel_set1.
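One likely cause of no results here: in a tstats search against a data model, fields generally have to be prefixed with the dataset name. Assuming the root event dataset is ndw_acc_datamodel_set1 as described above, a sketch of the adjusted query would be:

```
| tstats avg(ndw_acc_datamodel_set1.t_0_10s) AS avg_t_0_10s
    FROM datamodel=ndw_acc_datamodel
```

If the field prefix is wrong or missing, tstats tends to return empty results rather than an error, which matches the symptom described.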

Ultra Champion

Which part of the tstats command is not working: is it showing an error message, or producing no results?

Try | tstats values FROM datamodel=ndw_acc_datamodel to validate t_0_10s is the right field name.

According to the docs, you can use the summariesonly flag to restrict the search to only the items accelerated so far (if you desire). Also, prestats will return the data as if you had used the summary indexing commands, ready for further stats afterwards with more of the original summarization details.

http://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Tstats

Path Finder

Your query returns no results.

Esteemed Legend

If it is possible, roll up the distinct combinations into whatever breakout timespans you need (perhaps one hourly, one daily, and one monthly) and put them into a summary index. Then you can pull from that instead of the raw data.
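A sketch of that approach, where the summary index name (my_summary) and field names are hypothetical: a scheduled search, run hourly over the previous hour and saved with summary indexing enabled, writes pre-computed statistics using sistats:

```
| sistats count avg(response_time) BY host
```

A reporting search then reads from the summary index instead of the raw events:

```
index=my_summary | stats count avg(response_time) BY host
```

Because sistats stores the intermediate summary data rather than final results, the later stats over the summary index can correctly recombine averages across the hourly rollups.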