
How does DB Connect handle a row update?

cmeo
Contributor

I've been looking in the docs and in Answers for a solution to this problem. Say, for example, I want to look up a customer table with a rising ID field. Fine and dandy. But what happens if the customer's details change, for instance their address or phone changes? Splunk already has an event with this identifier, and won't re-index it as far as I can tell. But even if it did, you'd then have two events for the customer.
What needs to happen is for the first event to go away, e.g. using | delete. But I don't see anywhere that DB Connect has this functionality. If the DB records change rarely, is there any other solution than giving this source its own index, and dropping the whole thing and re-indexing it if there is one change?

So...what gives?

1 Solution

richgalloway
SplunkTrust

You have a couple of options.

1) DB Connect will re-index a changed row if the rising_column value for that row is higher than the last rising_column value read. Using a 'modificationTime' column as the rising_column would work here. Your searches would then need to use the dedup command to filter out the duplicate rows (see the sketch after this list).
2) Consider periodically reading the database in batch mode into a lookup file (or KV store). Each read would overwrite the existing lookup file, so you'd only have the most recent data in Splunk.
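
A minimal sketch of what option 1 might look like, assuming a DB Connect rising-column input; the table, column, index, and sourcetype names below are illustrative assumptions, not anything from the original post. The input's SQL uses the checkpoint placeholder so that only rows modified since the last run are indexed:

    -- Rising-column input query: '?' is DB Connect's checkpoint placeholder,
    -- which holds the highest modificationTime seen on the previous run.
    SELECT id, name, address, phone, modificationTime
    FROM customers
    WHERE modificationTime > ?
    ORDER BY modificationTime ASC

A search over the indexed events would then keep only the latest row per customer:

    index=main sourcetype=db_customers
    | dedup id sortby -modificationTime
    | table id name address phone modificationTime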

---
If this reply helps you, Karma would be appreciated.



jayh
Loves-to-Learn

Hello,

Can you please expand on how option 2 ("Consider periodically reading the database in batch mode into a lookup file (or KV store). Each read would overwrite the existing lookup file so you'd only have the most recent data in Splunk.") could be implemented?
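
One way option 2 might be implemented (a sketch of one approach, not the only one): schedule a search that pulls the table directly with DB Connect's dbxquery command and overwrites a CSV lookup with outputlookup. The connection name, query, and lookup filename here are assumptions for illustration:

    | dbxquery connection="my_db_connection" query="SELECT id, name, address, phone FROM customers"
    | outputlookup customer_details.csv

Saved as a scheduled report (for example, running hourly), each run replaces customer_details.csv, so lookups against it always return the current row for each customer rather than a history of changes.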

