Splunk Search

Splunk API - JSON Expecting value: line 1 column 1 (char 0)

bijodev1
Communicator

Hi Team, when I use curl I am able to get the output in JSON format.

But when I try the same request with the requests module, I get a JSON decode error:
import requests
import json
from requests.auth import HTTPBasicAuth

search_query = {'search': 'search earliest=-24h index=* userid=abc123'}
response = requests.post('https://1.1.1.1/services/search/jobs/export',
                         data=search_query, verify=False,
                         auth=HTTPBasicAuth('admin', 'pass'))
print("Status = " + str(response.status_code))
print("response text = " + str(response.text))
json_data = json.loads(response.text)   # fails here with "Expecting value: line 1 column 1 (char 0)"
print(json.dumps(json_data))



bijodev1
Communicator

@kamlesh_vaghela thank you so much for the answer, output_mode=json worked for me. Just wanted to add one more thing along with this.

search_query={'search':'search earliest = -24h index=* userid=abc123'}

I have a CSV file that contains a list of user IDs. I need to use them in the search query against the field userid. Can you guide me on how to do it, i.e. what placeholder I have to use here?
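
In case it helps, the working call is essentially the original one with output_mode=json added (same placeholder host and credentials as in the question):

import requests
from requests.auth import HTTPBasicAuth

search_query = {
    'search': 'search earliest=-24h index=* userid=abc123',
    'output_mode': 'json'   # without this the export endpoint answers in XML
}
response = requests.post('https://1.1.1.1/services/search/jobs/export',
                         data=search_query, verify=False,
                         auth=HTTPBasicAuth('admin', 'pass'))
print(response.status_code)
print(response.text)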


kamlesh_vaghela
SplunkTrust

@bijodev1 

First, make sure your CSV data is accessible from the search bar using the inputlookup command, like:

|inputlookup user_data | table userid

 

Then use it in your existing search as a subsearch:

search_query={'search':'search earliest = -24h index=* [|inputlookup user_data | table userid]'}
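
Plugged into the requests call, that would look roughly like the following sketch (host and credentials are placeholders, and user_data must already exist as a lookup on the search head):

import requests
from requests.auth import HTTPBasicAuth

# The subsearch pulls the userid values straight from the uploaded lookup.
search_query = {
    'search': 'search earliest=-24h index=* [| inputlookup user_data | table userid]',
    'output_mode': 'json'
}
response = requests.post('https://1.1.1.1/services/search/jobs/export',
                         data=search_query, verify=False,
                         auth=HTTPBasicAuth('admin', 'pass'))
print(response.text)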

 

Thanks
KV




bijodev1
Communicator

@kamlesh_vaghela, I want to use the CSV from Python, like a variable:

assign each value from the CSV to a variable, call it in the search query, and build the results into tabular data like a DataFrame.


kamlesh_vaghela
SplunkTrust

@bijodev1 

Maybe this will help you:

import requests
import urllib3
from base64 import b64encode
import urllib.parse
from csv import reader

# Suppress the InsecureRequestWarning triggered by verify=False.
urllib3.disable_warnings()


def fetch_data_using_userid(userid):
    # Export endpoint; output_mode=json makes it stream one JSON object per result line.
    url = "https://localhost:8089/servicesNS/admin/search/search/jobs/export"
    payload = {
        # Demo search against _internal; for the real use case swap in your own index
        # and a userid="..." filter instead of sourcetype.
        'search': f'search index=_internal earliest=-1h sourcetype="{userid}" | stats count by sourcetype',
        'output_mode': 'json'
    }
    safe_payload = urllib.parse.urlencode(payload)

    # Build a Basic auth header manually; admin:admin123 is a placeholder.
    userAndPass = b64encode(b"admin:admin123").decode("ascii")
    headers = {
        'Authorization': 'Basic %s' % userAndPass,
        'Content-Type': 'application/x-www-form-urlencoded'
    }
    response = requests.request("POST", url, headers=headers, data=safe_payload, verify=False)
    return response.text



# open file in read mode
with open('user_data.csv', 'r') as read_obj:
    # pass the file object to reader() to get the reader object
    csv_reader = reader(read_obj)
    header = next(csv_reader)
    if header is not None:
        # Iterate over each row in the csv using reader object
        for row in csv_reader:
            # row variable is a list that represents a row in csv
            print(row)
            response_text = fetch_data_using_userid(row[0])
            print(response_text)

 

Thanks
KV



bijodev1
Communicator

@kamlesh_vaghela thank you so much, it worked perfectly.

One last thing: can the data be appended or stored in the form of a pandas DataFrame?


kamlesh_vaghela
SplunkTrust

@bijodev1 

I have only a little knowledge of the pandas library, but you can do it.

In our Python script, `response_text` returns a JSON string (the export endpoint emits one JSON object per result line); you can convert that JSON into a DataFrame.

You can also pull selected values out of `response_text` and build a dictionary of userid and those values to create the DataFrame.

Please check the links below for the same.

https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.html

https://www.w3schools.com/python/pandas/pandas_dataframes.asp

https://towardsdatascience.com/how-to-convert-json-into-a-pandas-dataframe-100b2ae1e0d8
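
A minimal sketch of that idea, assuming the export response is newline-delimited JSON and that each result row sits under the "result" key (adjust the field handling to your data):

import json
import pandas as pd

def export_response_to_dataframe(response_text):
    # Each non-empty line of the export response is a standalone JSON object;
    # the actual field/value pairs of one result live under the "result" key.
    rows = []
    for line in response_text.splitlines():
        if not line.strip():
            continue
        obj = json.loads(line)
        if 'result' in obj:
            rows.append(obj['result'])
    return pd.DataFrame(rows)

# Example usage with the fetch_data_using_userid() helper from the script above:
# df = export_response_to_dataframe(fetch_data_using_userid("abc123"))
# print(df)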

 

Thanks
KV



kamlesh_vaghela
SplunkTrust

@bijodev1 

 

Refer to the example code below. It is working for me:

 

import requests
import urllib3
from base64 import b64encode
import urllib.parse

urllib3.disable_warnings()

url = "https://localhost:8089/servicesNS/admin/search/search/jobs/export"
payload = {
  'search': 'search index=_internal earliest=-1h | stats count by sourcetype',
  'output_mode':'json'
}
safe_payload = urllib.parse.urlencode(payload)

userAndPass = b64encode(b"admin:admin123").decode("ascii")
headers = {
  'Authorization': 'Basic %s' % userAndPass,
  'Content-Type': 'application/x-www-form-urlencoded'
}
response = requests.request("POST", url, headers=headers, data=safe_payload, verify=False)

print(response.text)
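
Since output_mode=json on the export endpoint returns one JSON object per line rather than a single JSON document, parsing the body would look roughly like this sketch (continuing the script above):

import json

# Each non-empty line of response.text is an individual JSON object.
for line in response.text.splitlines():
    if line.strip():
        event = json.loads(line)
        print(event.get('result'))  # the field/value pairs of one result row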

 

Thanks
KV


