I have a simple data model that I am querying, and I want to plot a time chart series from it. I am not able to plot anything using timechart, but I can plot with chart <> by _time. I don't know what I am missing, and because of this I cannot build the panel in a dashboard.
| pivot DataModel_AccessService perf count(TPS) AS "tps" sum(execTime) AS "execTime" SPLITROW _time AS _time PERIOD AUTO SPLITROW host AS hostname
This query is working
| pivot DataModel_AccessService perf count(TPS) AS "tps" sum(execTime) AS "execTime" SPLITROW _time AS _time PERIOD AUTO SPLITROW host AS hostname| chart sum(execTime) by _time
This query is not working
| pivot DataModel_AccessService perf count(TPS) AS "tps" sum(execTime) AS "execTime" SPLITROW _time AS _time PERIOD AUTO SPLITROW host AS hostname| timechart sum(execTime)
Please help me out.
I use the following eval after the pivot to convert the human-readable times back to raw epoch values on Splunk Enterprise 6.2.5, and it works fine.
| pivot DataModel_AccessService perf count(TPS) AS "tps" sum(execTime) AS "execTime" SPLITROW _time AS _time PERIOD AUTO SPLITROW host AS hostname | eval _time = strptime(_time, "%Y-%m-%dT%H:%M:%S.%3N%z") | timechart sum(execTime)
It looks like Splunk Enterprise 6.3.1 fixes this issue: the time values are not automatically converted to human-readable form on my 6.3.1 instance.
To be brief... you are kind of mixing metaphors. The Pivot interface (and its underlying language) is meant to let you create a chart without using SPL (the Search Processing Language). timechart is an SPL command that has an automatic x-axis of _time, and what PIVOT calls PERIOD is governed there by the span argument. You can't reproduce that exactly in PIVOT.
If you really want a traditional timechart built against your data model, you want the | datamodel command. That lets you access a data model while still using SPL.
http://docs.splunk.com/Documentation/Splunk/6.2.2/SearchReference/Datamodel
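As a rough sketch (assuming the object is named perf as in your pivot searches, and keeping in mind that fields returned by the datamodel command come out prefixed with the object name, so adjust the field names to whatever your model actually emits):
| datamodel DataModel_AccessService perf search | timechart span=5m sum(perf.execTime) AS execTime
The span value here is arbitrary; pick whatever granularity you need, or omit it to let timechart choose automatically.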
If I am not mistaken, the datamodel command is not accelerated even if the underlying data model is. That makes it irrelevant for my purposes.
Helge, it might be irrelevant to you because you need to use an accelerated data model, but not to the original questioner. The point is that PIVOT is not the correct method. You might want to open a new question specific to your needs.
This is how I handle it:
| tstats sum(DAC.time_ms) as agent_time_ms FROM datamodel=DataModel_DAC groupby _time | timechart span=10m avg(agent_time_ms) as agent_time_ms
Let me know if you have questions. Note that this command will only work when the data model is accelerated.
I am avoiding tstats because it does not support realtime searches.
I have the same issue. It is probably a bug 😞
It does seem to be a bug: it looks like if you use SPLITROW more than once, you lose the ability to run timechart afterwards.
At a high level, what are you trying to accomplish with this query? I might be able to suggest another way. For example, if all you're after is the sum of execTime over time, then this should do it:
| pivot DataModel_AccessService perf sum(execTime) AS "execTime" SPLITROW _time AS _time PERIOD AUTO
The pivot command will actually use timechart under the hood when it can.
I want to use timechart's ability to accept a dynamic span, chosen by a macro depending on the time range searched. That is not possible with pivot.
My solution is to do a pivot with the minimum period of "minute" followed by a timechart with a dynamic span. As you wrote, that fails as soon as a second splitrow is used.
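For reference, the shape I am after looks roughly like this (reusing the data model from the original question; the `set_span` macro name is only a placeholder for whatever macro expands to a suitable span=... argument for the selected time range), and it is exactly this combination that produces no timechart output:
| pivot DataModel_AccessService perf sum(execTime) AS "execTime" SPLITROW _time AS _time PERIOD minute SPLITROW host AS hostname | timechart `set_span` sum(execTime)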
I see. What are you using the second splitrow for? I ask because in the original post the "host" field was not being used by timechart, so the second splitrow could simply have been removed.
If you can't drop the second splitrow, then it's going to get a little ugly. The bug is that the pivot command formats the epoch times into human-readable timestamps, so you'll have to reverse that transformation:
| convert timeformat="%Y-%m-%dT%H:%M:%S.%3Q-%z" mktime(_time) as _time | timechart ...
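Stitched onto your pivot it would look roughly like this (the span and the aggregation on timechart are placeholders, and the timeformat string may need adjusting to match exactly what pivot emits on your instance):
| pivot DataModel_AccessService perf sum(execTime) AS "execTime" SPLITROW _time AS _time PERIOD minute SPLITROW host AS hostname | convert timeformat="%Y-%m-%dT%H:%M:%S.%3Q-%z" mktime(_time) as _time | timechart span=10m sum(execTime)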
I wish I had a more elegant solution for you 🙂
Thanks, Simon, but that does not seem to work. The timechart command still does not generate output. I also tried variations with "eval strftime/strptime" without success.
Weird, it worked for me locally. The catch with eval strftime/strptime is the direction of conversion: strftime formats an epoch time into a human-readable string, and here you need to go the other way, from the string back to an epoch. What version of Splunk are you running?
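To make the direction explicit (the field names here are just illustrative, and the format string has to match what pivot produces on your instance):
| eval epoch=strptime(_time, "%Y-%m-%dT%H:%M:%S.%3N%z") | eval readable=strftime(epoch, "%Y-%m-%dT%H:%M:%S.%3N%z")
strptime parses the string into an epoch value, which is what timechart needs in _time; strftime goes the other way.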