I am trying to set up a monitor to tail a table in my Postgres DB. I have a TIMESTAMP column that I am specifying as the rising column, but it does not seem to be pulling in any data.
I can query the DB just fine, with the exception that the timestamp is formatted as epoch time.
I have a directory monitor that is working just fine. Can someone please give me a few pointers on how to get my table monitor working?
Thank you,
j
The timestamp being emitted in epoch format is actually intended behavior.
Regarding your input not indexing your events: Can you please provide further information?
I did try to upgrade and it didn't seem to help the monitor. Actually, it seems the change made the errors coming back from the DB less clear in Splunk. That is not a big deal, since the logs in the DB show exactly what happened to the query, but it was worth noting.
You might want to try the database monitor again with the latest version. There was a potential problem with the Postgres JDBC driver that has since been taken care of.
Forgot to mention: I am not seeing any issues in the logs for the monitor.
This still did not solve the monitor issue. Maybe I don't need it; I seem to be able to save queries and put the results in a dashboard. Should I just skip monitoring the DB tables?
You can configure the input to fetch new data from now on by specifying
tail.follow.only = true
in the [dbmon-tail://MyType/Events] stanza in inputs.conf.
(After you change the file, a restart is necessary for DB Connect to pick up the changes.)
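Applied to the dbmon-tail stanza from your local inputs.conf (quoted further down), the result would look roughly like this (a sketch based on your existing settings; only the tail.follow.only line is new):

[dbmon-tail://MyType/Events]
output.format = kv
output.timestamp = 0
table = events
tail.rising.column = uploaded
# only fetch rows that arrive after the input starts; skip the existing table contents
tail.follow.only = true
host = MyHost
index = default
interval = auto
sourcetype = MyType
disabled = 0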
cat ./etc/apps/dbx/local/inputs.conf
[script://$SPLUNK_HOME/etc/apps/dbx/bin/jbridge_server.py]
disabled = 0
[batch://$SPLUNK_HOME/var/spool/dbmon/*.dbmonevt]
crcSalt = <SOURCE>
[dbmon-tail://MyType/Events]
output.format = kv
output.timestamp = 0
table = events
tail.rising.column = uploaded
host = MyHost
index = default
interval = auto
sourcetype = MyType
disabled = 0
cat ./etc/apps/dbx/default/inputs.conf
[script://$SPLUNK_HOME/etc/apps/dbx/bin/jbridge_server.py]
index = _internal
sourcetype = dbx_jbridge
interval = 0
disabled = true
passAuth = admin
[script://$SPLUNK_HOME\etc\apps\dbx\bin\jbridge_server.py]
index = _internal
sourcetype = dbx_jbridge
interval = 0
disabled = true
passAuth = admin
With your help I was able to see what looks like a memory issue, but I have no idea what to do with it. The table I have is large and I just want to start indexing data from now forward.
2012-12-19 15:27:16.031 monsch1:ERROR:Scheduler - Caught ExecutionException: java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
2012-12-19 15:27:16.031 dbx4409:INFO:ExecutionContext - Execution finished in duration=24530 ms
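If the OutOfMemoryError is triggered by the initial attempt to pull in the whole table, the tail.follow.only = true setting from the answer above should avoid it, since only rows arriving after the input starts will be fetched. If the Java bridge still needs more headroom, DB Connect 1.x can also take JVM options from a java.conf in the app's local directory; treat the snippet below as a sketch (the file location, the options key, and the -Xmx value are assumptions, so check java.conf.spec in your dbx version and size the heap to your host):

# $SPLUNK_HOME/etc/apps/dbx/local/java.conf
[java]
# raise the maximum heap available to the DB Connect Java bridge
options = -Xmx1024m

Restart Splunk afterwards so the Java bridge is relaunched with the new settings.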