Sorry for the question, but I can't think of a sane and sensible way to get this data out of Splunk in a computationally efficient way.
Our data sources:
- Active directory security events
- CSVs of computer names and categories (e.g. computer_x,public and computer_y,private)
We want to look at how people use the computers, but only for a subset of them, e.g. how many distinct people used "public" computers in each hour over the last month.
Taking each computer name in turn and getting Splunk to search through the AD events looking for logons is very slow (the AD logs are 30 GB+ a day). I've thought about creating a summary index with just the relevant data, but I can't quite figure out the best way of doing it.
Does anyone have any suggestions as to the "right" approach to take for getting these answers?
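
For reference, the brute-force search is roughly equivalent to the one below. The index, sourcetype, field names and lookup file name are illustrative and will differ depending on how the AD logs are ingested; EventCode 4624 is the Windows "successful logon" security event.

```
index=wineventlog sourcetype="WinEventLog:Security" EventCode=4624 earliest=-30d@d
| lookup computer_categories.csv computer_name AS ComputerName OUTPUT category
| search category=public
| timechart span=1h dc(Account_Name) AS public_users
```

It gives the right numbers, but it has to scan every raw logon event for the whole month, which is what makes it so slow.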

Have you tried using Accelerated Data Models & tstats?
http://docs.splunk.com/Documentation/Splunk/6.4.3/Knowledge/Acceleratedatamodels
https://helgeklein.com/blog/2015/10/splunk-accelerated-data-models-part-1/
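
For example, if you map the AD logon events into the CIM Authentication data model and accelerate it, something along these lines should stay fast even at 30GB+/day. This is just a sketch: the lookup file and its column names are assumptions, and your data model and field names may differ.

```
| tstats count
    FROM datamodel=Authentication
    WHERE Authentication.action="success" earliest=-30d@d
    BY _time span=1h, Authentication.user, Authentication.dest
| rename Authentication.user AS user, Authentication.dest AS dest
| lookup computer_categories.csv computer_name AS dest OUTPUT category
| search category=public
| timechart span=1h dc(user) AS public_users
```

Grouping by user and dest inside tstats keeps the heavy work in the accelerated summaries; the CSV lookup and the distinct count then run over a small aggregated result set instead of the raw events, and dc(user) after the category filter still counts each person once per hour.
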
Never used data models before but they seem to fit the bill perfectly, thanks!
