I am running Splunk version 6.1.3 and creating a dashboard (form) in simple XML using post process. I appear to be losing fields in the post process search. The search is supposed to populate a dashboard table with about 20 fields (explicitly defined using the table command), but in the dashboard view only 5 of the fields appear, and the column headers are not displayed. Yet if I "open in search" from that dashboard panel, the search shows all of the fields I am specifying in the table. There is no obvious filtering in the search template. The search template does generate a couple hundred fields; is there a limit? I am confident that all the fields exceed the 1% minimum.
Any help would be appreciated.
Searches within dashboards are run in "fast mode", which means that only those fields that are explicitly referenced within the search are available as selected fields in the result set.
To ensure that you pass the necessary fields from a global search down to the post process searches, you will need to use the "| fields" command with the list of fields needed. As a shortcut, you can also use "| fields *" to pass all fields as selected fields.
So your search could look something like this:
<searchTemplate>index=_internal log_level="ERROR" | fields *</searchTemplate>
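For context, a minimal form wiring a base search to a post process panel might look like the sketch below (the panel title and the `component` field are placeholders for illustration, not from the original question):

```xml
<form>
  <!-- Base search: "| fields *" forces all fields to be passed down
       to the post process searches as selected fields -->
  <searchTemplate>index=_internal log_level="ERROR" | fields *</searchTemplate>
  <row>
    <table>
      <title>Errors by component</title>
      <!-- Post process runs against the base search's result set -->
      <searchPostProcess>stats count by component | sort - count</searchPostProcess>
    </table>
  </row>
</form>
```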
Here's the explanation:
"Sometimes you end up with a dashboard running lots of different searches but they all seem annoyingly similar.
One advanced technique is to run a single search, then use 'postProcess' to take the data in N different directions for N different charts.
Note: Read carefully. If set up improperly your results can be misleading.
It's tempting to have your base search just be the 'events' part of the search, and then have your postProcess modules each run a different report, like "timechart sum(kb) by series" or "chart avg(eps) over series".
However, that can get you into trouble because Splunk doesn't do unnecessary work: if the search contains no indication that anyone wants statistics for a given field, it won't collect them. Or, what's almost worse, it might collect incomplete statistics. In the end you might find that your postProcess always seems to return 0 results, or it returns results that on closer inspection are not correct. (For the advanced reader: the naive approach also breaks map-reduce a bit.)
The solution is to use the stats command in your base search. Stats will do all the work and get what Sorkin (aka the Sorkinator) calls the 'sufficient statistics'. Then later your postProcess searches will have all the raw materials they need.
Specifically, the search has these clauses on the end:
| bin _time span=5min | stats count by series, eps, kb, kbps, _time
The stats count with the various group-by clauses is the important part. The bin command further optimizes our base search so that we don't have one row per timestamp, but one aggregate row per 5-minute bucket. Check out all the stuff on this page that we're able to do from just one search.
read through the XML source for this view in Manager to see how it works for yourself."
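A sketch of that base-search-plus-postProcess pattern in simple XML could look like this (the `group=per_source_thruput` filter and the two example panels are assumptions for illustration; the important part is that each postProcess re-aggregates rows the base stats already produced):

```xml
<form>
  <!-- Base search pre-aggregates with bin + stats so the post process
       searches have the "sufficient statistics" they need -->
  <searchTemplate>index=_internal group=per_source_thruput
    | bin _time span=5min
    | stats count by series, eps, kb, kbps, _time</searchTemplate>
  <row>
    <chart>
      <title>Total KB over time</title>
      <searchPostProcess>stats sum(kb) by _time</searchPostProcess>
    </chart>
    <chart>
      <title>Average eps by series</title>
      <searchPostProcess>stats avg(eps) by series</searchPostProcess>
    </chart>
  </row>
</form>
```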
I am not using _time bins. I believe there must be a limit on the number of fields allowed to pass from the search template to the post process. If I explicitly define the fields in the search template, they show up. I am adding 15 post process searches/panels to the dashboard, explicitly defining the fields used in the search template as I go. So far I haven't hit a limit; this may be a workaround. If I do hit a limit, I will document it.
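As an example of that workaround, the explicit field list in the search template would look something like this (the field names here are placeholders for whatever the post process panels actually reference):

```xml
<searchTemplate>index=_internal log_level="ERROR"
  | fields host, source, sourcetype, component, log_level, message</searchTemplate>
```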