Good question!

If your table is small, this is not a problem. If the table is large and you want to eliminate full table scans or minimize the footprint of your query, you have a few options. Let's assume you're using Oracle for a second; most databases have similar features. You could also check with your DBA about optimizing your search queries for best performance. Here are a couple of options that come to mind.
Option 1 - Create a materialized view on the database that holds no more than x amount of time's worth of data, then have Splunk query the materialized view, with or without an index.
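A minimal sketch of such a materialized view, assuming a table MY_TABLE with columns A, B, and ENTRY_DATE (all names here are placeholders, and the refresh schedule is just an example):

```sql
-- Sketch only: MY_TABLE, A, B, ENTRY_DATE are assumed names.
-- Keeps roughly the last 24 hours of rows; refreshed hourly.
CREATE MATERIALIZED VIEW MY_TABLE_RECENT
  BUILD IMMEDIATE
  REFRESH COMPLETE
  START WITH SYSDATE NEXT SYSDATE + 1/24
AS
  SELECT A, B, ENTRY_DATE
  FROM MY_TABLE
  WHERE ENTRY_DATE > SYSDATE - 1;
```

Splunk then points at MY_TABLE_RECENT instead of the base table, so even a full scan only touches a day of data.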
Option 2 - Create a function-based index on the regular table and let Splunk query the table normally, with its concatenated time field used for filtering; that filter expression should match the function-based index. Basically, the same expression you used to manipulate the field you are filtering on can itself be indexed.
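A sketch of that index, again with placeholder names. Note the expression in the index must match the expression in Splunk's WHERE clause character for character, or Oracle's optimizer will not use it:

```sql
-- Sketch only: MY_TABLE, A, B are assumed names.
-- Indexes the same A || B expression the Splunk query filters on.
CREATE INDEX MY_TABLE_AB_IDX ON MY_TABLE (A || B);
```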
Option 3 - If you have no access to the database, or it is a vendor database and you are querying with limited access: in your base sub-query, add a second date filter on the original, unmodified field (date only, without time) and force it to less than 24 hours of data. Your Splunk filter will then full-scan only what comes out, which in this case is 24 hours of data instead of the whole table, even with no further indexes available.
Example:

SELECT A, B, CD
FROM (SELECT A,
             B,
             A || B AS CD
      FROM my_table
      WHERE ENTRY_DATE > TRUNC(SYSDATE) - 1) query1
{{ where ... rising...column....}}