
Splunk DB Connect - dbquery inline search and time filtering not working

guilmxm
SplunkTrust

Hi all,

I am currently working on various dashboards for my company. For some of them I need to request data from local databases using DB Connect (the data is nmon monitoring of AIX LPARs).

As I am using the "dbquery" command, I'm not indexing any local data; these are inline searches only. (Locally indexing data from these databases is possible with DB Connect, but it would mean a large amount of redundant data in our information system.)

This works very well, but my problem is that I can't get any time filter to work, whether as an inline search or via the time picker in my XML code.

Example search:

| dbquery nmondb "select hostname as hostname, ZZZZ as timestamp, ((EC_User_PCT+EC_Sys_PCT+EC_Wait_PCT+EC_Idle_PCT)*entitled/100) as conso,((EC_User_PCT+EC_Sys_PCT+EC_Wait_PCT+EC_Idle_PCT)*entitled/100)/virtualCPUs as percent_conso from lpar where year(ZZZZ)=2013;" limit=100 earliest=-h | table * | fields timestamp, hostname, conso, percent_conso

Output sample:

    ZZZZ              hostname   conso               percent_conso
    1357858873.000    gpXXXXX    0                   0
    1357859114.000    gpXXXXX    8.65223975375652    0.508955279632737
    1357859354.000    gpXXXXX    10.4659196991014    0.615642335241262
    1357859594.000    gpXXXXX    10.5134396686173    0.618437627565721

the fields "ZZZZ" contains the SQL column date (in SQSL dateformat), for Splunk to understands the timestamp, i successfully used this example using "rename", then Splunk seems to understand the conversion and show a human readable timestamp:

| dbquery nmondb "select hostname as hostname, ZZZZ, ((EC_User_PCT+EC_Sys_PCT+EC_Wait_PCT+EC_Idle_PCT)*entitled/100) as conso,((EC_User_PCT+EC_Sys_PCT+EC_Wait_PCT+EC_Idle_PCT)*entitled/100)/virtualCPUs as percent_conso from lpar where year(ZZZZ)=2013;" limit=100 earliest=-h | table * | rename ZZZZ As _time | fields _time, hostname, conso, percent_conso

Output sample:

    _time                      hostname   conso               percent_conso
    1/11/13 12:01:13.000 AM    gpXXXXX    0                   0
    1/11/13 12:05:14.000 AM    gpXXXXX    8.65223975375652    0.508955279632737
    1/11/13 12:09:14.000 AM    gpXXXXX    10.4659196991014    0.615642335241262
    1/11/13 12:13:14.000 AM    gpXXXXX    10.5134396686173    0.618437627565721

So the initial SQL timestamp value "1357858873.000" is converted by Splunk into "1/11/13 12:01:13.000 AM".

This works with timechart, BUT I can't get any time filtering to work, even inside the search itself. I guess renaming the field is not enough...

I really need this to work to successfully introduce Splunk into our company...

Thank you very much for your help!


brunton2
Path Finder

Here is a pretty seamless method to achieve the desired result, at least for my needs with an Oracle DB. I should add that this method differs from most of the other solutions by pre-filtering the query results on the database server rather than post-filtering the results in Splunk (which was my requirement due to the number of rows an unbounded time query would return).

Define the timepicker with an 'on change' eval to transform the selection into converted start and end date/time strings:

<input type="time" token="timerange">
  <label>Date Range</label>
  <default>
    <earliest>-2d@d</earliest>
    <latest>now</latest>
  </default>
  <change>
      <eval token="form.et">strftime(relative_time(now(),'earliest'), "%F %T")</eval> 
      <eval token="form.lt">strftime(relative_time(now(),'latest'), "%F %T")</eval> 
  </change>
</input>

Then use the string tokens directly in the dbquery SQL (the "%F %T" strftime format above produces "YYYY-MM-DD HH:MM:SS" strings, matching the TO_DATE format masks):

....  AND (TIME_STAMP >= TO_DATE('$form.et$', 'YYYY-MM-DD HH24:MI:SS') AND TIME_STAMP <= TO_DATE('$form.lt$', 'YYYY-MM-DD HH24:MI:SS'))
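
For context, a complete inline search using these tokens against the nmondb example from the original question could look like the following. This is a sketch under assumptions, not brunton2's exact query: the lpar table and ZZZZ column are taken from the question above, and TO_DATE is Oracle syntax (a MySQL backend would need STR_TO_DATE instead):

| dbquery nmondb "SELECT hostname, ZZZZ, entitled FROM lpar WHERE ZZZZ >= TO_DATE('$form.et$', 'YYYY-MM-DD HH24:MI:SS') AND ZZZZ <= TO_DATE('$form.lt$', 'YYYY-MM-DD HH24:MI:SS')"

Since the filtering happens in the WHERE clause, only rows inside the picked time range ever leave the database.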

I was trying to find a decent solution to this for months!

jimdiconectiv
Path Finder

I like this approach and am implementing it, but I have noticed one wrinkle/issue. The times delivered in the XML form will be client times (usually the laptop's) and will be offset by the time zone of the client (not the server time, and not the Splunk time zone set for the user).

The strftime function makes it possible to find the offset of the client time zone (in my current case -0400, EDT vs. UTC), but not to easily do the math to convert to UTC. I have a developing work-around that I will post, but I wonder if anyone has run into this and has something neat.
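
For what it's worth, the offset arithmetic itself can be sketched in SPL: parse the "%z" offset string (e.g. "-0400") into seconds, then shift the epoch value before formatting. This is a demo with makeresults and illustrative field names, not jimdiconectiv's actual work-around; also note that, per the issue described above, the <eval> in the dashboard XML produces client-local times, while the same functions in a search run server-side:

| makeresults
| eval tz=strftime(now(), "%z")
| eval sign=if(substr(tz,1,1)="-", -1, 1)
| eval offset_secs=sign*(tonumber(substr(tz,2,2))*3600 + tonumber(substr(tz,4,2))*60)
| eval utc_string=strftime(now() - offset_secs, "%F %T")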


brunton2
Path Finder

Unfortunately our client time and Splunk time are all within the same time zone, so we haven't had to consider this problem. However, we have recently been assessing migrating to DB Connect v2 and have observed that timestamps don't convert natively, so we ended up implementing a stored function to translate them back to Unix epoch for use with Splunk. Not sure if you could use a similar technique to supply the client TZ and have the backend DB convert the timestamps accordingly? The following was our example for the conversion that we wrap the timestamp selects with:

create or replace function date_to_epoch(in_date in date) return number is
  epoch number(38);
begin
  select (in_date - to_date('1-1-1970 00:00:00','MM-DD-YYYY HH24:Mi:SS'))*24*3600 into epoch from dual;
  return epoch;
exception when others then
  return null;
end;
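
Usage would then look something like this in the dbquery SQL (table and column names are illustrative, reusing TIME_STAMP from the earlier example):

SELECT date_to_epoch(TIME_STAMP) AS EPOCH_TS, OTHER_COLUMN FROM MYTABLE

On the Splunk side, an | eval _time=EPOCH_TS (or a rename) then gives Splunk a native epoch timestamp to work with.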

volsunghawk
Engager

This works beautifully with my MySQL DB. I used ziegfried's apply_timerange macro as well (which also works very well!), but we hoped to find a way to filter within the query itself instead of pulling in a ton of results and then filtering in Splunk. Thanks!


psw_rchapin
New Member

In my approach I move all of the messing about into the SQL query. I capture the time range picker min and max time values as variables, then convert those numeric Unix time values to datetime values. I handle the '+Infinity' value for "all time" by using a case statement and replacing '+Infinity' with the current date plus 100 years (yeah... this is a bit hacky... I know). I then handle the UTC offset by calculating the difference between getdate() and getutcdate() (the MS SQL Server approach) and adjusting the times by the stored offset value. Lastly, I eval _time to be equal to the timestamp field that is returned.

I then save this as a saved search, and future execution is super simple.

Here is how I accomplished this:

| dbquery "DB DISPLAY NAME" [ stats count | head 1 | addinfo | eval query="

-- Splunk time range picker handling

declare @StartRangeTime varchar(50) = '".info_min_time."'
declare @EndRangeTime varchar(50) = '".info_max_time."'
declare @StartDate datetime = DATEADD(second, cast(@StartRangeTime as decimal),'1970-01-01')
declare @EndDate datetime = (select case 
    when @EndRangeTime = '+Infinity' 
        then DATEADD(year, 100, GETDATE())
    else DATEADD(second, cast(@EndRangeTime as decimal),'1970-01-01')
end) 
declare @UTCOffset smallint = (select datediff(hour, getutcdate(), getdate()))

set @StartDate = dateadd(hour, @UTCOffset, @StartDate)
set @EndDate = dateadd(hour, @UTCOffset, @EndDate)

-- End Splunk time range picker handling

-- SQL query below ----------

  select FIELDS
  from TABLE
  where DATETIMEFIELD between @StartDate and @EndDate 

-- SQL query above ----------

" | fields query | format "" "" "" "" "" ""] | eval _time = DATETIMEFIELD  

And then running my saved search looks like this:

| savedsearch "SAVED SEARCH NAME"

sowings
Splunk Employee

I like ziegfried's answer, and it's quite effective. I took a slightly different approach, but I'll point out that my way doesn't (yet) work within a dashboard. My method involves adjusting the query string that gets sent to the database, inserting the desired time endpoints, rather than simply taking the first X rows coming back from the DB and then filtering them down. The gist is that you have to convert addinfo's info_min_time and info_max_time into something the database can understand, and apply it to your time stamp field. Note that you'll also have to tweak your query slightly.

First, I rewrite my query slightly to contain placeholder variables for the start and end times:

SELECT * FROM TABLE WHERE CONDITION AND TIMESTAMP >= '%earliest_time%' AND TIMESTAMP < '%latest_time%'

Now, in order to get those replaced with meaningful values, I took a little journey. The destination was this:

[ | stats count
  | addinfo
  | eval et=round(info_min_time, 0)
  | eval lt=if(info_max_time="+Infinity", "now", round(info_max_time, 0))
  | convert timeformat="%Y-%m-%d %k:%M:%S" ctime(et), ctime(lt)
  | eval sql_str="\"SELECT * FROM TABLE WHERE CONDITION AND TIMESTAMP >= '%earliest_time%' AND TIMESTAMP < '%latest_time%'\""
  | eval sql_str=replace(sql_str, "%earliest_time%", et)
  | eval sql_str=replace(sql_str, "%latest_time%", lt)
  | return $sql_str ]

This goes after your | dbquery <DATABASE> search command. I also put this into a macro where the one input was the SQL string. I had to escape some quotes, though, so the input looked like this:

`db_timed_search("\"SELECT * FROM TABLE WHERE CONDITION AND TIMESTAMP >= '%earliest_time%' AND TIMESTAMP < '%latest_time%'\"")`
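
The macro definition itself isn't shown in this post; a plausible macros.conf stanza, reconstructed from the subsearch above in the same format as ziegfried's apply_timerange macro below (the stanza and argument names are assumptions):

[db_timed_search(1)]
args = sql
definition = stats count | addinfo | eval et=round(info_min_time, 0) | eval lt=if(info_max_time="+Infinity", "now", round(info_max_time, 0)) | convert timeformat="%Y-%m-%d %k:%M:%S" ctime(et) ctime(lt) | eval sql_str=$sql$ | eval sql_str=replace(sql_str, "%earliest_time%", et) | eval sql_str=replace(sql_str, "%latest_time%", lt) | return $sql_str
iseval = 0

The $sql$ argument is substituted as literal text into the definition, which is why the quotes around the SQL string had to be escaped in the invocation above.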

EDIT: It is possible to use this in a dashboard. The issue is that simple XML and advanced XML want to pre-parse the search string, and in this case, the token replacement causes the parser to burp. If you convert your dashboard to use Sideview Utils, you can use this approach in a dashboard / form just like any other search.

Cuyose
Builder

OK, I got this to work. For some reason, when adding it via the manager it wouldn't take, but I added it to macros.conf as the original answer showed and it's working. This is great!


Cuyose
Builder

I'm still having the issue of getting "unknown search parameter apply" errors and the "error in search parser..." message. I have the macro set in the Advanced search pane, but I can't get the search to use it:

| dbquery magentoprod "select sfo.created_at,sfo.increment_id, sfo.customer_id,sfo.tax_amount, oa.city,oa.postcode, oa.region ,oa.country_id
from sales_flat_order sfo
join sales_flat_order_address oa
on sfo.shipping_address_id=oa.entity_id
and sfo.tax_amount>0
and sfo.state='processing'"| 'apply_timerange(created_at)'| eval postcode=substr(postcode, 1, 5) |convert timeformat="%m/%d/%Y %H:%M:%S" ctime(created_at)| dedup increment_id| RENAME postcode as zip | lookup L_zipDB zip OUTPUT county |sort 0 increment_id


ziegfried
Influencer

The dbquery command is not meant to do any kind of filtering. It simply emits the results of the SQL query. There is no clean approach to filtering results for the selected search time range in a generic way, as a SQL query might emit no, one, or multiple viable candidates for an event timestamp field.

That being said, you can use SPL (the Splunk search language) to filter the results in the search pipeline. The simplest approach is to use the where command:

| dbquery mydb "SELECT timestamp_column, other_column1, other_column2 FROM mytable" | where timestamp_column>relative_time(now(),"-7d")

In this example, the results emitted by dbquery are reduced to those that match the condition in the where command, so only results where the value of timestamp_column is within the last 7 days are output.

A little more sophisticated approach is to additionally use the addinfo command. This adds some meta information about the search to the search results, which allows you to actually honor the time range selected in the time range picker.
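
Spelled out without a macro, the full pattern is just the dbquery followed by addinfo and a where clause on the meta fields (this is the macro defined below, inlined):

| dbquery mydb "SELECT timestamp_column, other_column1, other_column2 FROM mytable"
| addinfo
| where timestamp_column>=info_min_time AND (info_max_time="+Infinity" OR timestamp_column<=info_max_time)
| fields - info_min_time info_max_time info_search_time info_sid

The where clause assumes timestamp_column holds epoch seconds, comparable to info_min_time and info_max_time; the trailing fields command just removes the meta fields from the output.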

Here's a little search macro that should ease the usage:

[apply_timerange(1)]
args = timefield
definition = addinfo | where $timefield$>=info_min_time AND (info_max_time="+Infinity" OR $timefield$<=info_max_time) | fields - info_min_time info_max_time info_search_time info_sid
iseval = 0

You can add this stanza to macros.conf or add it via Splunk Manager (Advanced Search -> Search Macros). The usage is fairly simple:

| dbquery mydb "SELECT timestamp_column, other_column1, other_column2 FROM mytable" | `apply_timerange(timestamp_column)`

xaxvier
Engager

A lifesaver! Thank you!


kumagaur
New Member

| where timestamp_column>relative_time(now(),"-7d") is not working for dbxquery... Is there anything similar we can do with dbxquery?


kumagaur
New Member

Any luck?


brunton2
Path Finder

Try the following (dbxquery doesn't automatically convert times):

| eval _time=strptime(timestamp_column,"%Y-%m-%d %H:%M:%S") | where _time>relative_time(now(),"-7d")
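
To also honor the time picker with dbxquery's string timestamps, the strptime conversion can be folded into a variant of ziegfried's macro. A sketch only, assuming the column is a string matching the given format; the macro name and argument names are illustrative, not from this thread:

[apply_timerange_str(2)]
args = timefield, format
definition = eval _time=strptime($timefield$, $format$) | addinfo | where _time>=info_min_time AND (info_max_time="+Infinity" OR _time<=info_max_time) | fields - info_min_time info_max_time info_search_time info_sid
iseval = 0

Usage: | dbxquery ... | `apply_timerange_str(timestamp_column, "%Y-%m-%d %H:%M:%S")`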

jossplacencia
New Member

One question: is timefield the token from the timepicker?

I have this error:

Error in 'SearchParser': The search specifies a macro 'apply_timerange' that cannot be found. Reasons include: the macro name is misspelled, you do not have "read" permission for the macro, or the macro has not been shared with this application. Click Settings, Advanced search, Search Macros to view macro information.

My macro is public and shared with the application. Please help.


koprai
Explorer

Thanks a lot... works like a charm!


ihayesjr
New Member

Never mind. I got it to work. I put the macro in a different spot of the search and it started to work.


ihayesjr
New Member

I cannot get the macro to work. I keep getting an error stating "Error in 'SearchParser': Missing a search command before '''.


CerielTjuh
Path Finder

I can only add my opinion: great work! This helped me a lot!!!


abchernin
Engager

Dude, you've no idea how much you've helped me with this. I was faced with the prospect of implementing a couple dozen data (and mostly date) formatting SQL views, which would have been hell to maintain, all just to construct two decent dashboards.

If you're ever in Moscow, beer's on me.


guilmxm
SplunkTrust
SplunkTrust

I really want to thank you; you perfectly answered my question and my needs, and both of your solutions work as expected.

The macro solution is the one I adopted for my dashboards. It works very well, and now I can connect Splunk to our DB as we need to.

Again, thank you very much for your quick and great answer.
