Reporting

Is it possible to create a form to better present Splunk data coming from another app, without the use of an index?

sjanwity
Communicator

I have some data coming from the Splunk DB Connect app which I need to present in a report. I want this report to have filtering functionality, which necessitates the use of forms. I'm following the Splunk tutorial on how to implement a basic keyword filter on a report through a simple form, but I found that I need to use an index as an intermediary input data store.

I currently cannot use an index due to some other issues which, in all fairness, will be solved soon, so this question is purely academic: is it possible to use forms this way without storing the data in an index?


aweitzman
Motivator

You should be able to do this. I just wrote a short simple XML dashboard where the searchTemplate started with | dbquery and it worked just fine, no importing into an index necessary. I was able to use an input parameter from the form in the SQL query itself, as well as outside of the dbquery clause, and it all worked as expected.
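
For reference, a minimal sketch of such a dashboard, in the legacy Simple XML form syntax the tutorial uses (mydb, dbo.mytable, MyField, and the formValue token are placeholder names, not anything from this thread):

<form>
  <label>DB query report</label>
  <fieldset>
    <input type="text" token="formValue">
      <label>Keyword</label>
    </input>
  </fieldset>
  <searchTemplate>| dbquery mydb "SELECT * FROM dbo.mytable WHERE MyField LIKE '%$formValue$%'"</searchTemplate>
  <row>
    <table>
      <title>Matching rows</title>
    </table>
  </row>
</form>

The searchTemplate runs the SQL directly against the database each time the form is submitted, so nothing ever has to land in an index.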


sjanwity
Communicator

I'd already taken the series out when I got the no-results message, and I have no idea why the period is there! The Splunk tutorial on form filters has it, and the search seems to break without it.

1) http://i.stack.imgur.com/i3aj2.jpg

2) http://i.stack.imgur.com/ksel5.jpg (inverting the less-than and greater-than symbols gives the same result)


aweitzman
Motivator

(Don't invert the symbols! That will always fail!)

Anyway, looking at image 1, I notice that the search window from your time picker is one second wide, so you're not likely to get many results in that window. However, the real problem is that info_min_time is greater than UPDATE_TIME. So of course, when you apply the where clause, you get no results (as in image 2), because your search window does not include those events.

Change your time picker to be a range that includes those events (or, as I had suggested earlier, "All time"), and then go through my earlier suggestion of adding clauses back in and seeing what changes.
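
One way to check this is to drop the where clause and look at the addinfo fields next to the time column directly (mydb and dbo.mytable stand in for the real dbquery arguments here):

| dbquery mydb "SELECT * FROM dbo.mytable" | addinfo | table UPDATE_TIME info_min_time info_max_time

If info_min_time is greater than every UPDATE_TIME value, the where clause is guaranteed to filter out everything.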


sjanwity
Communicator

No results on 'All Time' either. Besides, shouldn't they have shown up when I inverted the symbols, if that was the issue?


aweitzman
Motivator

Pics like before, of (a) success with "All time" and no where clause, and (b) failure with "All time" and where clause, please?

(If you invert the symbols, you are checking to see if your value is less than the low number, and greater than the high number, simultaneously. This is mathematically impossible. If you are trying to invert the range, you need to invert the symbols and also change the AND to OR.)
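
For instance, with hypothetical bounds low and high, | where x > low AND x < high keeps events inside the range, | where x < low OR x > high keeps events outside it, and | where x < low AND x > high can never match anything.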


sjanwity
Communicator

aweitzman
Motivator

Aha! Splunk can be really annoying sometimes, eh? Looks like "All Time" is a special-case search that botches the info_max_time value (it comes back as the string "+Infinity" rather than a number). So try:

| where (info_min_time < UPDATE_TIME AND (info_max_time > UPDATE_TIME OR info_max_time = "+Infinity"))


sjanwity
Communicator

Yes that worked!

http://i.stack.imgur.com/t3eLn.jpg

However, the next line, | transaction OBJECT_KEY FIELD_NAME keeporphans=true maxspan=1s maxevents=2, fails to get any results...

Also, tell me about Splunk being annoying! The transactions you gave me aren't 100% reliable, I'm afraid, and I opened another question for this: for some reason it wasn't grouping some rows as transactions even though everything is exactly the same. I am exasperated as to what is going on...

EDIT: I renamed UPDATE_TIME to _time and everything worked! Thank you!

But I am very, very confused as to how?! What exactly did your added lines do behind the scenes that made the time range picker work now, when it didn't before?


aweitzman
Motivator

The added lines simply gave some time context to the dbquery results, because out of the box, they're just database entries. They don't get real _time values until they're in a Splunk index, which is when the time picker can work against them. Absent that, taking the metadata about the search (from the addinfo fields) and applying them to an existing time-related field in your dbquery results is a roundabout way of doing the same thing.

Also, you appear to have stumbled upon the fact that while renaming UPDATE_TIME to _time does not have any effect on the time picker (which is why you needed my added lines), it does have an effect on the transaction command. Which kind of makes sense: the time picker values are applied only to the initial dbquery search, so adding a _time field via rename after the fact has no effect there. But since you've generated _time values by the time you run the transaction command, it has what it needs to work properly.
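
Putting the pieces from this thread together, the whole pipeline sketched out (mydb and dbo.mytable are placeholders, and this assumes UPDATE_TIME holds epoch seconds so it can stand in for _time):

| dbquery mydb "SELECT * FROM dbo.mytable"
| addinfo
| where (info_min_time < UPDATE_TIME AND (info_max_time > UPDATE_TIME OR info_max_time = "+Infinity"))
| eval _time = UPDATE_TIME
| transaction OBJECT_KEY FIELD_NAME keeporphans=true maxspan=1s maxevents=2

The addinfo and where lines emulate the time picker against the database rows, and the eval gives transaction the _time values it needs to group events.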

As for the transaction not being 100% perfect... sorry, I can't help you there. The only thing I can suggest is seeing whether doing it against data in a real Splunk index versus a dbquery table gets you better results.


sjanwity
Communicator

@aweitzman I've finally reached the end of my epic (unfortunately the solution I discovered yesterday didn't work 100%). You were a bit wrong in that transaction sees the updated _time value; I think doing eval _time = UPDATE_TIME just tricks Splunk into using some arbitrary _time field. In any case, I don't fully understand what is going on (other than that my report finally works... hopefully), so I've opened a new question in the hopes that someone will explain it more clearly. I invite you to come along if you're curious about the mysteries of this wondrous transaction command.


sjanwity
Communicator

I'm doomed 😞

We unfortunately only have one Splunk admin supporting around 100 devs, which is why I have to resort to all these hacky solutions. I'll let you know what happens when I get a proper index to work with, but in your opinion, could an index solve a lot more problems?


aweitzman
Motivator

Not surprisingly, Splunk is optimized to work with Splunk indexes, not outside databases. So at the very minimum, you can expect that putting the data into a Splunk index will improve performance, since dbquery is generally going to be less efficient. It will also allow you to do a lot more with the data, such as creating summary indexes, which make it even faster to generate summary statistics and so forth.
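
For example, once an index is available, one way to get the data into it is a scheduled search that snapshots the database rows with the collect command (my_summary is a hypothetical index name, and UPDATE_TIME is again assumed to hold epoch seconds):

| dbquery mydb "SELECT * FROM dbo.mytable" | eval _time = UPDATE_TIME | collect index=my_summary

Searches against index=my_summary then get real _time values, so the time picker and commands like transaction work without any of the workarounds above.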

And it will allow you to work faster since you won't have to be coding these hacks. (Well, there will still be times when it feels like you're coding hacks, especially since you're coming from SQL and things just don't work the same in Splunk, but you'll still be more productive overall.)


aweitzman
Motivator

The time filter can't work without some manipulation, because results from dbquery don't have a Splunk time associated with them out of the box. If you have a field with the epoch time in it, I believe adding | rename timefield AS _time will use that field as the Splunk time, which should then allow for time filtering.

I do not understand what you mean by "Adding a $series$ field...". Can you explain that further, please? For me, both of the following worked, where $formValue$ is the contents of an input box:

| dbquery mydb "SELECT * FROM dbo.mytable where MyField like '%$formValue$%'"

| dbquery mydb "SELECT * FROM dbo.mytable" | search MyField="*$formValue$*"
