Splunk Search

outputlookup timechart followed by inputlookup timechart

Motivator

Say I run outputlookup after a timechart command. Now I have a CSV file that should already be formatted for the timechart command. I want to inputlookup that CSV and timechart it. How do I do that?

My outputlookup timechart looks like this:

mysearch | timechart useother=f limit=0 count by host | outputlookup count_host.csv

My inputlookup timechart looks like this:

| inputlookup count_host.csv | timechart useother=f limit=0 sum(count) by host ?

Builder

It should work like a charm, as long as you run timechart against the fields that are in your CSV file.


Splunk Employee

I'm going to answer your question, but as others say, if you're simply trying to "speed up" this search, then use one of the following:

  • Scheduled saved search (loading the saved search will reference the cached results from the last run)
  • Report acceleration
  • Data model acceleration
  • Summary indexing

Anyway, the answer is:

| inputlookup count_host.csv | timechart values(*) as *

Motivator

Your answer is not correct. Splunk returns the following error:

Error in 'timechart' command: Wildcard field '*' cannot be used in xyseries command for xfield or yfield.
  • There seems to be an issue with loading the saved search; it may stop re-running the search once the saved search has run on its schedule for the first time, though.
  • Report acceleration is not kicking in.
  • Data model acceleration has not been explored and may be a solution.
  • Summary indexing would of course work, but since it is most naturally suited to searching back in time, I have not yet determined whether it is the best approach for this use case.

Splunk Employee

Please expand mysearch, then, since you just now specified that you were using | xyseries. I was simply using index=_internal for my base search. I don't understand what's wrong with the saved search. Are you saying it's not running on a schedule? Have you tried loading it via | loadjob or | savedsearch?


Motivator

I am not using xyseries. I am using timechart.

I don't understand what you mean by "expand mysearch". The dashboard panel that generated the error is:

| inputlookup iis_count_host.csv | timechart values(*) by *

The saved search is scheduled, but the first scheduled time is tonight at midnight.

I have not tried using loadjob or savedsearch with the saved search. Those may be good options ... but then I realized that if those work straight out of the box, then so would | inputlookup so I just now changed the dashboard panel to:

| inputlookup iis_count_host.csv

and it worked! Sort of. One of the host fields was grabbed for the x-axis and _time was graphed on the y-axis.

| inputlookup iis_count_host.csv | table _time SRV*

This seems to correct it, taking advantage of the fact that my server names all start with SRV (not applicable to other use cases), but the numbers are low (about 1/5 of expected) and the lines do not match the original timechart. Not sure what is going on there.


Splunk Employee

Try | loadjob savedsearch=admin:search:foo, where admin is your user, search is the app, and foo is the name of the saved search. Then | loadjob will instantly show the results of the saved search's last run (assuming you scheduled it).
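For example, with a hypothetical user admin, app search, and saved search named count_host_report (all names illustrative, not from this thread), the entire panel search would be just:

| loadjob savedsearch=admin:search:count_host_report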


Motivator

loadjob seems to have exactly the same effect as inputlookup: the results appear identical and therefore have the same issues described above. timechart will not work on either with my current approaches.


Motivator

I figured it out. The issue with the saved search is two-fold. First, the saved search has to be kicked off by the schedule and allowed to finish. Second, the timechart has to have _time as the first column and has to use sum(*) AS *.

| loadjob savedsearch=username:search:report_iis_events_host | table _time SRV* | timechart sum(*) AS *

...which leaves the issue of putting the _time value first in the list of fields. That was easier in my case, where the fields all began with SRV.

Or the dashboard panel can simply be tied directly to the saved search.
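Putting the whole round trip together (a sketch using the count_host.csv file name from the original post; sum(*) re-aggregates the per-host count columns while leaving _time alone):

Scheduled saved search:

mysearch | timechart useother=f limit=0 count by host | outputlookup count_host.csv

Dashboard panel:

| inputlookup count_host.csv | timechart sum(*) AS *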


Splunk Employee

timechart doesn't care about the order of the fields, as long as the previous results have _time and the other fields that you're performing timechart on. This works: index=_internal | table host _time | timechart count by host. I still don't understand why | inputlookup foo.csv | timechart values(*) as * doesn't work for you. Lastly, if your saved search already ends with timechart, then you don't need anything but the loadjob command.


Motivator

If I just use loadjob and look at the visualization, then it takes the first field as the x-axis. But you are right: if I pipe it into timechart values(*) AS *, then it does not care about the field order and plots correctly, but the time axis extends to the current hour. For a saved search covering yesterday, only half the timechart is filled with data while the remaining hours are empty. It would be nice to tell it to stop where the data stops. My current solution for that is to do search field1=
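One possible way to clamp the chart to the data (a sketch, not verified against this setup): timechart sizes its buckets from the search's time range, so setting the panel's time range to match the saved search's window should stop the chart where the data stops. For the saved search covering yesterday:

| loadjob savedsearch=username:search:report_iis_events_host | timechart sum(*) AS *

with the panel's time picker set to yesterday (earliest=-1d@d, latest=@d).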


Splunk Employee

If you run timechart over the past 24 hours but there is only data for the past hour, then you'll see 0's for the other 23 hours; that's just how timechart works. I suggest you play around with a variety of data to get used to timechart. As I said before, the order doesn't matter before timechart (it just needs _time). It sounds like you changed the order after timechart and then tried to loadjob it.


Motivator

I am describing a saved search, run for the previous day, whose results are loaded into timechart. timechart seems to fill in 0s for all times from the last time value in the saved-search data all the way to the current time.

I did not change the order of the data. Splunk seems to re-order the fields alphabetically when the data is loaded from loadjob or from inputlookup.


Splunk Employee

There is something wrong with your search, and hence with its results. This works:

index=_internal | timechart count by host | outputlookup foo.csv

followed by:

| inputlookup foo.csv | timechart values(*) as *


Motivator

I'm assuming you want to do this to make the search more efficient... why don't you simply use acceleration on a saved-search version of mysearch | timechart ... ?


Motivator

I am trying to use search acceleration, but it is doing nothing:

Summary ID: reportiisevents_user
  Summarization Load: 0.0000    Access Count: 0 (Last Access: Never)
  Summary Status: Summarization not started (Updated: Never)

Summary ID: reportiisevents_host
  Summarization Load: 0.0000    Access Count: 0 (Last Access: Never)
  Summary Status: Summarization not started (Updated: Never)

My guess is that the data to be summarized simply isn't large enough to kick off the acceleration process.


SplunkTrust

What you suggest seems equivalent to

mysearch | timechart useother=f limit=0 count by host | timechart useother=f limit=0 count by host

which I wouldn't expect to work.

What exactly are you trying to accomplish?

---
If this reply helps you, an upvote would be appreciated.

Motivator

The goal is to quickly pull a timechart to a dashboard panel as a timechart.
