In the Fundamentals 1 course, lab 8 tells us:
"As a best practice and for best performance, place dedup as early in the search as possible." (page 4)
But the quick reference guide tells us:
"Postpone commands that process over the entire result set (non-streaming commands) as late as possible in your search. Some of these commands are: dedup, sort, and stats." (page 2)
The example search they give in lab 8 places dedup before the distributable streaming command 'rename':
index=main sourcetype="access_combined_wcookie" action=purchase status=200 file="success.do"
| dedup JSESSIONID
| table JSESSIONID, action, status
| rename JSESSIONID as UserSessions
Would it not make sense to place dedup after rename? I guess 'as early as possible' is ambiguous anyway, but any input on where to place dedup would be greatly appreciated.
Cheers,
Roelof
The best way to tackle the above query is:
index=main sourcetype="access_combined_wcookie" action=purchase status=200 file="success.do"
| stats count by JSESSIONID, action, status
| rename JSESSIONID as UserSessions
stats (or dedup) is much more efficient when it runs first: reduce the data as much as possible before you do field-level manipulations. In other words, do the statistical reduction as early as possible in your search.
I believe this answer is not quite correct. The optimized query is:
index=main sourcetype="access_combined_wcookie" action=purchase status=200 file="success.do"
| table JSESSIONID, action, status
| stats count by JSESSIONID, action, status
| rename JSESSIONID as UserSessions
In a clustered Splunk environment, lines 1-2 execute in parallel on your indexers, the minimized data is then passed to the search head, the search head executes line 3, and line 4 then operates on only a handful of rows.
I try to always do a TABLE early in the query, especially before doing an expensive DEDUP, STATS, or BIN. That reduces the dataset on all your indexers, discarding unneeded fields, before it's merged on your search head. Instead of TABLE you could alternately do two FIELDS commands, one to include the necessary fields and another to remove _raw. Computationally I don't know whether Splunk is more efficient handling event data from FIELDS or handling transformed data from TABLE, but TABLE makes the query simpler.
You have mixed up table and fields. You should never use table before stats and the like, as table always moves processing onto the search head side. So you should do:
index=main sourcetype="access_combined_wcookie" action=purchase status=200 file="success.do"
| fields JSESSIONID, action, status
| stats count by JSESSIONID, action, status
| rename JSESSIONID as UserSessions
This lets several indexers do the preliminary phase of stats, then send smaller result sets to the SH, which finally combines them to produce the true result of stats.
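The map/reduce behavior described above can be sketched outside of Splunk. The following Python is an illustrative model only, not Splunk internals: each "indexer" computes a partial count by key over its own events, and the "search head" merges those small partial results. The event dictionaries and field names (JSESSIONID, action, status) mirror the query in this thread but the data itself is made up.

```python
from collections import Counter

def partial_stats(events):
    """Map phase on one indexer: count events by key locally."""
    return Counter((e["JSESSIONID"], e["action"], e["status"]) for e in events)

def merge_partials(partials):
    """Reduce phase on the search head: sum the partial counts."""
    total = Counter()
    for p in partials:
        total += p
    return total

# Hypothetical events spread across two indexers.
indexer_a = [
    {"JSESSIONID": "S1", "action": "purchase", "status": 200},
    {"JSESSIONID": "S1", "action": "purchase", "status": 200},
]
indexer_b = [
    {"JSESSIONID": "S2", "action": "purchase", "status": 200},
]

partials = [partial_stats(indexer_a), partial_stats(indexer_b)]
combined = merge_partials(partials)
# Each indexer ships only its small Counter of partial counts,
# not every raw event, so the search head merges far less data.
```

The point of the sketch: because counting distributes over partitions, the expensive reduction can run in parallel close to the data, and only the already-reduced results travel to the merging tier, which is why putting stats before search-head-only commands pays off.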
Hi Koshyk,
Thank you for the quick reply. Just a follow-up: does this mean that if I rename before stats or dedup, it would take more time? And would that be because rename is then operating over a larger dataset than if it were executed after stats/dedup?
