
How does DB connect handle a row update?

cmeo
Contributor

I've been looking in the docs and on Answers for a solution to this problem. Say, for example, I want to ingest a customer table using a rising ID column. Fine and dandy. But what happens if a customer's details change, for instance their address or phone number? Splunk already has an event with that identifier and, as far as I can tell, won't re-index it. But even if it did, you'd then have two events for the customer.
What needs to happen is for the first event to go away, e.g. using | delete, but I don't see that functionality anywhere in DB Connect. If the DB records change rarely, is there any solution other than giving this source its own index, then dropping the whole thing and re-indexing whenever a single record changes?

So...what gives?

1 Solution

richgalloway
SplunkTrust

You have a couple of options.

1) DB Connect will re-index a changed row if the rising_column value for that row is higher than the last rising_column value read. A 'modificationTime' column would work well as the rising_column here. Your search would then need to use the dedup command to filter out superseded rows (see the first sketch after this list).
2) Consider periodically reading the database in batch mode into a lookup file (or KV store). Each read would overwrite the existing lookup file, so you'd only have the most recent data in Splunk (see the second sketch after this list).
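
A minimal sketch of option 1, assuming purely illustrative names (an index called customers, a unique key column customer_id, and a modificationTime rising column; none of these come from the thread):

    index=customers sourcetype=db_customer ``` every row version DB Connect has indexed ```
    | sort 0 -modificationTime ``` newest modification first ```
    | dedup customer_id ``` keep only the latest row per customer ```
    | table customer_id name address phone modificationTime

Because dedup keeps the first event it encounters, sorting by modificationTime in descending order first ensures the surviving event is the latest version of each customer. (The triple-backtick inline comments need a Splunk version that supports SPL comments; drop them otherwise.)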
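For option 2, one way it could be implemented (a sketch assuming DB Connect v3's dbxquery command; the connection name, table, and lookup file are hypothetical) is a scheduled search that re-reads the whole table and overwrites a CSV lookup:

    | dbxquery connection="my_customer_db" query="SELECT customer_id, name, address, phone FROM customers"
    | outputlookup customer_master.csv

Schedule that search to run periodically (e.g. nightly). Each run replaces customer_master.csv wholesale, so the lookup always reflects the current state of the table and an updated row never leaves a duplicate behind. Pointing outputlookup at a KV store collection instead works the same way and scales better for large tables.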

---
If this reply helps you, Karma would be appreciated.



jayh
Loves-to-Learn

Hello,

Can you please expand on how option 2 (periodically reading the database in batch mode into a lookup file or KV store, with each read overwriting the existing lookup file so that only the most recent data is in Splunk) could be implemented?

