Splunk Dev

Load a CSV from GCP into a KV store lookup using the Python SDK

cdhippen
Path Finder

We currently have a 45 MB CSV file that we load into a Splunk KV store every day. I want to accomplish this via the Python SDK, but I'm running into trouble loading the records.

The only way I can find to add records to a KV store collection is `collection.data.insert()`, which as far as I can tell accepts only one document at a time. With 250k rows in this file, I can't afford to wait for every line to upload individually each day.
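(Editor's note: besides `insert()`, splunklib's `KVStoreCollectionData` also exposes `batch_save()`, which posts many documents per request; the server caps a single batch, typically at 1,000 documents, so the records need to be chunked. A minimal sketch of the chunked-upload pattern, assuming the collection already exists and `records` is a list of dicts such as the output of `df.to_dict(orient='records')`:)

```python
def chunks(records, size=1000):
    """Yield successive slices of at most `size` records."""
    for i in range(0, len(records), size):
        yield records[i:i + size]

def bulk_load(collection_data, records, batch_size=1000):
    """Upload records via batch_save() instead of one insert() per row.

    `collection_data` is a splunklib KVStoreCollectionData object,
    e.g. service.kvstore['learning_center'].data.
    """
    saved = 0
    for batch in chunks(records, batch_size):
        # batch_save takes the documents as positional arguments
        collection_data.batch_save(*batch)
        saved += len(batch)
    return saved

# usage sketch (assumes `service` is an authenticated splunklib client
# and `result` is the list of row dicts built from the DataFrame):
# bulk_load(service.kvstore['learning_center'].data, result)
```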

This is what I have so far:

from splunklib import client, binding
import json
import numpy as np
import pandas as pd
from copy import deepcopy

data_file = '/path/to/file.csv'

username = 'user'
password = 'splunk_pass'
connectionHandler = binding.handler(timeout=12400)
connect_kwargs = {
    'host': 'splunk-host.com',
    'port': 8089,
    'username': username,
    'password': password,
    'scheme': 'https',
    'autologin': True,
    'handler': connectionHandler
}
flag = True
while flag:
    try:
        service = client.connect(**connect_kwargs)
        service.namespace['owner'] = 'Nobody'
        flag = False
    except binding.HTTPError:
        print('Splunk 504 Error')

kv = service.kvstore
if 'learning_center' in kv:
    kv['learning_center'].delete()
df = pd.read_csv(data_file)
# replace() returns a new frame; it must be assigned back
df = df.replace(np.nan, '', regex=True)
df['_key'] = df['key_field']
result = df.to_dict(orient='records')
# infer field types from the first row
fields = deepcopy(result[0])
for field in fields.keys():
    fields[field] = type(fields[field]).__name__
kv.create(name='learning_center', fields=fields, owner='nobody', sharing='system')
data = kv['learning_center'].data
for row in result:
    data.insert(json.dumps(row))
transforms = service.confs['transforms']
transforms.create(name='learning_center_lookup',
                  **{'external_type': 'kvstore', 'collection': 'learning_center',
                     'fields_list': '_key, userGuid', 'owner': 'nobody'})
# transforms['learning_center_lookup'].delete()
collection = service.kvstore['learning_center']
print(collection.data.query())

In addition to the problem of taking forever to load a quarter million records, it keeps failing on rows with nan as the value, and no matter what I put in there to deal with the nan, it persists in the dictionary values.
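(Editor's note: the persistent NaN comes from pandas. `read_csv` fills empty cells with float NaN, `DataFrame.replace()` and `fillna()` return a new frame rather than modifying in place, and `json.dumps` serializes a float NaN as the bare token `NaN`, which is not valid JSON, so the KV store rejects the document. Assigning the cleaned frame back before building the records fixes it. A small sketch with hypothetical sample data:)

```python
import json
import math
import numpy as np
import pandas as pd

# a tiny stand-in for the real CSV, with one missing value
df = pd.DataFrame({'userGuid': ['a1', 'b2'], 'score': [3.5, np.nan]})

# json.dumps would emit the invalid token NaN for the missing value:
# json.dumps(df.to_dict(orient='records')[1]) -> '{"userGuid": "b2", "score": NaN}'

# fillna returns a new frame; the result must be assigned
# (an unassigned replace()/fillna() call is why the NaN persists)
clean = df.fillna('')
records = clean.to_dict(orient='records')

# no float NaN survives, so every record is JSON-serializable
assert not any(isinstance(v, float) and math.isnan(v)
               for row in records for v in row.values())
print(json.dumps(records[1]))  # prints {"userGuid": "b2", "score": ""}
```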



cdhippen
Path Finder

Is there no way to do this with the Splunk Python SDK?
