All Apps and Add-ons

Splunk DB Connect - rising column issue

arun_kant_sharm
Path Finder

Hi Experts,

I am using the SQL below to fetch the most recently run commands along with their stats and user information. For this I made LAST_LOAD_TIME the rising column, and I execute the query every 10 minutes.

V$SQLAREA.LAST_LOAD_TIME,
V$SQLAREA.LAST_ACTIVE_TIME

But V$SQLAREA is populated very frequently, so using a timestamp as the rising column is not a good idea: in this scenario I lose events at the checkpoint boundary (rows that share the timestamp already recorded as the checkpoint).
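The boundary-condition loss can be illustrated with a small sketch (this is an illustrative simulation, not DB Connect code; the `poll` helper and the sample rows are made up for the demo). A rising-column input stores the highest value seen and next time fetches only rows strictly greater than it, so a row committed later with the *same* timestamp is never fetched:

```python
from datetime import datetime

# Simulated V$SQLAREA rows: (LAST_LOAD_TIME, sql_id). Several rows can
# share the same second in a busy database.
rows = [
    (datetime(2020, 1, 1, 10, 0, 0), "a"),
    (datetime(2020, 1, 1, 10, 0, 5), "b"),
    (datetime(2020, 1, 1, 10, 0, 5), "c"),  # same timestamp as "b"
    (datetime(2020, 1, 1, 10, 0, 9), "d"),
]

def poll(visible_rows, checkpoint):
    """Mimic a rising-column input: fetch rows strictly newer than the
    checkpoint, then advance the checkpoint to the max value fetched."""
    fetched = [r for r in visible_rows if r[0] > checkpoint]
    new_checkpoint = max((r[0] for r in fetched), default=checkpoint)
    return fetched, new_checkpoint

# First poll: only "a" and "b" are visible yet ("c" and "d" not committed).
first, ckpt = poll(rows[:2], datetime.min)   # checkpoint becomes 10:00:05

# Second poll: "c" shares the checkpointed timestamp, so the strict ">"
# comparison silently skips it.
second, ckpt = poll(rows, ckpt)
print([sql_id for _, sql_id in second])  # only "d" - "c" is lost
```

This is exactly why a non-unique time field makes a risky rising column: correctness depends on no two rows ever sharing a checkpoint value.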

Please suggest how I can fetch all events into my index.

SELECT SESION.SID,
       SESION.USERNAME,
       SESION.PROCESS,
       SQLAREA.SHARABLE_MEM,
       SQLAREA.PERSISTENT_MEM,
       SQLAREA.RUNTIME_MEM,
       SQLAREA.FETCHES,
       SQLAREA.EXECUTIONS,
       CASE
         WHEN SQLAREA.EXECUTIONS > 0 THEN
           ROUND(SQLAREA.ELAPSED_TIME / SQLAREA.EXECUTIONS, 3)
         ELSE NULL
       END ELAP_PER_EXEC,
       SQLAREA.END_OF_FETCH_COUNT,
       SQLAREA.LOADS,
       SQLAREA.FIRST_LOAD_TIME,
       SQLAREA.DISK_READS,
       SQLAREA.DIRECT_WRITES,
       SQLAREA.BUFFER_GETS,
       SQLAREA.PLSQL_EXEC_TIME,
       SQLAREA.JAVA_EXEC_TIME,
       SQLAREA.ROWS_PROCESSED,
       SQLAREA.COMMAND_TYPE,
       SQLAREA.OPTIMIZER_MODE,
       SQLAREA.OPTIMIZER_COST,
       SQLAREA.PARSING_SCHEMA_ID,
       SQLAREA.PARSING_SCHEMA_NAME,
       SQLAREA.MODULE,
       SQLAREA.SERIALIZABLE_ABORTS,
       SQLAREA.CPU_TIME,
       SQLAREA.ELAPSED_TIME,
       SQLAREA.REMOTE,
       SQLAREA.OBJECT_STATUS,
       SQLAREA.LAST_LOAD_TIME,
       SQLAREA.LAST_ACTIVE_TIME
FROM V$SQLAREA SQLAREA, V$SESSION SESION
WHERE SQLAREA.HASH_VALUE = SESION.SQL_HASH_VALUE (+)
  AND SQLAREA.ADDRESS = SESION.SQL_ADDRESS (+)
  AND SQLAREA.LAST_LOAD_TIME > ?
ORDER BY SQLAREA.LAST_LOAD_TIME ASC


diogofgm
SplunkTrust

A time field is often not the best choice for a rising column, as you might not catch all events in a very busy DB. What are the time requirements for your use case? If indexing data with some degree of delay is not a big issue, you can try building your SQL query to look at the last complete hour (e.g. 1 PM to 2 PM) and running a DB Connect batch input a bit after the upper boundary (e.g. at 2:30 PM). This way you query a "fixed" time interval and index it into Splunk. The only way to lose events with this approach is if rows are added to the DB with an older value in the time field.

If 1 h is too much delay, you can in theory shorten the interval to every 10 minutes and apply the same logic to the indexing time (run 5 minutes after the upper boundary), still using a batch input, but I would say that is more likely to miss something.
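To make the fixed-window idea concrete, here is a small sketch (the `last_complete_hour` helper is a hypothetical name for illustration) that computes the boundaries of the most recent fully elapsed hour; a batch input scheduled after the upper boundary would bind these two values into a range filter instead of the rising-column placeholder, e.g. `WHERE SQLAREA.LAST_LOAD_TIME >= ? AND SQLAREA.LAST_LOAD_TIME < ?`:

```python
from datetime import datetime, timedelta

def last_complete_hour(now):
    """Return (start, end) of the most recent fully elapsed hour.

    E.g. at 14:30 the window is 13:00 -> 14:00, so a batch input running
    at half past the hour always queries a closed, fixed interval."""
    end = now.replace(minute=0, second=0, microsecond=0)
    start = end - timedelta(hours=1)
    return start, end

# A run at 14:30 queries the 13:00-14:00 window.
start, end = last_complete_hour(datetime(2024, 5, 1, 14, 30))
print(start, end)
```

Because both boundaries are fixed and the run happens well after `end`, every row in the window is committed by query time, so no boundary events are skipped (barring rows inserted later with an older time value, as noted above).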

------------
Hope I was able to help you. If so, some karma would be appreciated.