All Apps and Add-ons

Add-on or KV Stores approach



So I'm trying to build a dashboard that will provide statistics (and visualizations) based on two types of data. Some background information:
1. Data that is fed into Splunk (say, firewall logs)
2. Data fetched from a third-party API (note: the data is fetched daily, and only the latest pull should be considered for any analysis)

One restriction: I don't want to run a script outside of the Splunk environment (no external script fetching data from the API).

I have a few questions:

  1. Is there a way, from within the Add-on Builder, to request data from the third-party API and feed it into a KV store?
    I've looked through the Add-on Builder documentation (a modular input using the REST API, and the Python helper functions), but those force the fetched data to be written to an index using the writeEvent method. Is there any other approach?

  2. I've gone with the index-based approach for now and feed the data into an index. To correlate the data I built a data model, but is there a way to perform a lookup operation between two indexes? (I have an IP in the firewall logs and an IP fetched from the API as well; I want to perform a field-based lookup, if that makes it clearer.)

  3. Another big question: the firewall logs have {src_ip, dest_ip}, and from the third-party API I get {ip, score}. Is there a way to map ip to both src_ip and dest_ip at the same time and join the score for both automatically in one query?
    Currently I'm using two different queries (one for src and one for dest):

    | from datamodel:"Network.All_Traffic" | join type=left src_ip [ search source=third_party_api_data | rename ipAddress AS src_ip]
    | from datamodel:"Network.All_Traffic" | join type=left dest_ip [ search source=third_party_api_data | rename ipAddress AS dest_ip]
    I'm then forced to perform an outer join combining the two result sets above, then a dedup on ip and score to get a unique score for every IP, and that is crashing my Splunk instance. Is it because I have two streaming datasets (updated every second with new logs), perform the mapping based on src and dest, join those, and then dedup on top of that?
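To make the goal concrete, what I'm hoping for is a single query along these lines: collapse each subsearch to one score per IP before joining, so there is no dedup step at all. Just a sketch of the shape I'm after, not working SPL I've validated:

```spl
| from datamodel:"Network.All_Traffic"
| join type=left src_ip
    [ search source=third_party_api_data
      | stats latest(score) AS src_score BY ipAddress
      | rename ipAddress AS src_ip ]
| join type=left dest_ip
    [ search source=third_party_api_data
      | stats latest(score) AS dest_score BY ipAddress
      | rename ipAddress AS dest_ip ]
```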

Do advise 🙂


Splunk Employee

| from datamodel:"Network.All_Traffic" | join type=left src_ip [ search source=third_party_api_data | stats latest(score) AS src_score BY ipAddress | rename ipAddress AS src_ip ]
Something like that should limit the amount of data returned by your subsearch (as for the crashing, that's just a guess at the cause, though).

If you make this third_party_api_data a lookup, then you can just use automatic lookups in your data model, which is much easier and removes the need for two searches.
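For example, assuming a lookup definition named third_party_api_lookup with fields ip and score (adjust to your actual names), both directions can be enriched in a single pass, with no joins and no dedup:

```spl
| from datamodel:"Network.All_Traffic"
| lookup third_party_api_lookup ip AS src_ip OUTPUTNEW score AS src_score
| lookup third_party_api_lookup ip AS dest_ip OUTPUTNEW score AS dest_score
```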

There are already apps on Splunkbase that can pull in data from a REST endpoint; just run a scheduled search with | outputlookup at the end to put the output into a lookup:
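The scheduled search would look roughly like this (source and field names taken from your queries above); keeping only the latest score per IP matches your daily-refresh requirement, since outputlookup replaces the lookup's contents by default:

```spl
source=third_party_api_data
| stats latest(score) AS score BY ipAddress
| rename ipAddress AS ip
| outputlookup third_party_api_lookup
```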


I'll definitely take a look at those add-ons, thanks for the share.

The reason I've not invested in third-party add-ons is that this project uses confidential data, so bundling third-party add-ons with it is going to be a big NO from management. But I should be able to understand how they make the request using a Splunk search query and use it to my advantage.

If not, I'll try out the query you shared at the start of the message and see if that helps with the crashing.



New error: "Error in 'SearchProcessor': Found circular dependency when expanding from.Network_Traffic.All_Traffic"



I am a little confused about #1: it's actually easier to create a lookup to store the fetched data if you do not want to index it, as you hint at.

#2 can also be solved with a lookup. Just populate the lookup with the outputlookup command, referencing the first index. Then run the query on the second index, referencing your lookup with the inputlookup command.
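As a sketch of that pattern (index, lookup, and field names are placeholders, not your actual ones):

```spl
index=api_data | stats latest(score) AS score BY ip | outputlookup api_scores.csv

index=firewall [ | inputlookup api_scores.csv | rename ip AS src_ip | fields src_ip ]
```

The first search (run on a schedule) writes the lookup; the second uses it as a subsearch filter, so only firewall events whose src_ip appears in the lookup are returned.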

I'll leave #3 for someone with a little more experience, which is why I'm submitting this as a comment and not an answer.



1. So yeah, I did create a Python script to fetch the data and store it in the KV store, but with the Add-on Builder it's not possible, at least based on my reading of the documentation. Here is the link to the Add-on Builder's Python functions. They do have checkpoints, but a checkpoint stores just one key and one value, so that's pretty much useless here.

Can you share some documentation or a snippet that allows feeding data into the KV store from an add-on?
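For context, the closest in-Splunk route I've found so far (not from the Add-on Builder docs) is defining a KV store collection and a kvstore-backed lookup by hand in the app, so a search can write to it with outputlookup. Collection and stanza names below are made up for illustration:

```ini
# collections.conf
[api_scores]
field.ip = string
field.score = number

# transforms.conf
[api_scores_lookup]
external_type = kvstore
collection = api_scores
fields_list = _key, ip, score
```

With that in place, `... | outputlookup api_scores_lookup` should land the search results in the collection.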

2. Yeah, I've tried using lookups. But I'm sticking with the index-based approach because I cannot achieve (1) with lookups.
