How does DB Connect handle a row update?

cmeo
Contributor

I've been looking in the docs and in Answers for a solution to this problem. Say, for example, I want to index a customer table using a rising ID field. Fine and dandy. But what happens if the customer's details change, for instance their address or phone number? Splunk already has an event with this identifier and, as far as I can tell, won't re-index it. But even if it did, you'd then have two events for the customer.
What needs to happen is to make the first event go away, e.g. using | delete. But I don't see that DB Connect has this functionality anywhere. If the DB records change rarely, is there any other solution than giving this source its own index, and dropping and re-indexing the whole thing if there is a change?

So...what gives?

1 Solution

richgalloway
SplunkTrust

You have a couple of options.

1) DB Connect will re-index a changed row if the rising_column value for that row is higher than the last rising_column value read, so using a 'modificationTime' column as the rising_column would work here. Your searches would then need the dedup command to filter out superseded rows (see the first sketch below).
2) Consider periodically reading the database in batch mode into a lookup file (or KV store). Each read would overwrite the existing lookup file, so you'd only have the most recent data in Splunk (see the second sketch below).
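
For option 1, here's a minimal sketch. The table, column, and index names are made up for illustration. The input's SQL would use modificationTime as the rising column; DB Connect substitutes the last saved checkpoint for the ? placeholder:

    -- rising-column input query; ? is replaced with the stored checkpoint
    SELECT customer_id, name, address, phone, modificationTime
    FROM customers
    WHERE modificationTime > ?
    ORDER BY modificationTime ASC

At search time, keep only the newest event for each customer:

    index=customer_db sourcetype=db_customers
    | dedup customer_id sortby -_time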

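For option 2, one way to implement it (a sketch, assuming a DB Connect connection named customer_db and a lookup file named customers.csv, both hypothetical): schedule a search that pulls the whole table with DB Connect's dbxquery command and overwrites the lookup with outputlookup.

    | dbxquery connection="customer_db" query="SELECT customer_id, name, address, phone FROM customers"
    | outputlookup customers.csv

Because outputlookup replaces the file on every run, the lookup always reflects the current state of the table; updated rows simply overwrite their old versions. For large tables, consider writing to a KV store collection instead of a CSV file.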
---
If this reply helps you, Karma would be appreciated.


jayh
Loves-to-Learn

Hello,

Can you please expand on how option 2 could be implemented, i.e. periodically reading the database in batch mode into a lookup file (or KV store) so that each read overwrites the existing lookup file and Splunk holds only the most recent data?

