Splunk Dev

Help with the Splunk Add-on Builder - creating a custom Python app

tomapatan
Contributor

Hi Everyone,

The issue with the code below appears to be that the value of the {report_id} variable is not being passed correctly to the download_report function, in particular on this line:

url = f"https://example_url/{report_id}/download"

If I hardcode the URL with a valid report ID instead of the {report_id} variable, the report gets downloaded as expected.

Any help would be much appreciated!

Full code below:

import requests
 
def collect_events(helper, ew):
    """
    Main function to authenticate, generate report ID, and download the report.
    """
    username = helper.get_arg('username')
    password = helper.get_arg('password')
    auth_url = "https://example_url/auth"
    headers = {
        'Content-Type': 'application/x-www-form-urlencoded',
    }
    data = {
        'username': username,
        'password': password,
        'token': 'true',
        'permissions': 'true',
    }

    try:
        # Step 1: Authenticate to get the JWT token
        auth_response = requests.post(auth_url, headers=headers, data=data)
        if auth_response.status_code == 201:
            jwt_token = auth_response.text.strip()  # Extract and clean the token
            if jwt_token:
                # Log and create an event for the JWT token
                event = helper.new_event(
                    data=f"JWT Token: {jwt_token}"
                )
                ew.write_event(event)

                # Step 2: Generate the report ID
                report_id = generate_report_id(jwt_token, helper)
                if report_id:
                    # Log and create an event for the report ID
                    event = helper.new_event(
                        data=f"Report ID: {report_id}"
                    )
                    ew.write_event(event)

                    # Step 3: Download the report
                    file_path = download_report(jwt_token, report_id, helper)
                    if file_path:
                        helper.log_info(f"Report successfully downloaded to: {file_path}")
                    else:
                        raise ValueError("Failed to download the report.")
                else:
                    raise ValueError("Failed to generate report ID.")
            else:
                raise ValueError("JWT token not found in response.")
        else:
            raise ValueError(f"Failed to get JWT: {auth_response.status_code}, {auth_response.text}")
    except Exception as e:
        helper.log_error(f"Error in collect_events: {e}")

 
def generate_report_id(jwt_token, helper):
    url = "https://example_url"
    headers = {
        "accept": "application/json",
        "Authorization": f"Bearer {jwt_token}"
    }
    params = {
        "havingQuery": "isSecurity: true",
        "platform": "Windows"
    }
 
    try:
        response = requests.get(url, headers=headers, params=params)
        if response.status_code in (200, 201):
            report_data = response.json()
            report_id = report_data.get('reportId')
            if report_id:
                return report_id
            else:
                raise ValueError("Report ID not found in response.")
        else:
            raise ValueError(f"Failed to generate report ID: {response.status_code}, {response.text}")
    except Exception as e:
        helper.log_error(f"Error while generating report ID: {e}")
        raise ValueError(f"Error while generating report ID: {e}")

def download_report(jwt_token, report_id, helper):
    """
    Downloads the report using the JWT token and report ID.
    """

    url = f"https://example_url/{report_id}/download"
    headers = {
        "accept": "application/json",
        "Authorization": f"Bearer {jwt_token}",
    }

    try:
        # Make the request to download the report
        response = helper.send_http_request(url, method="GET", headers=headers, verify=True)
        
        if response.status_code in (200, 201):
            # Save the report content to a file
            sanitized_report_id = "".join(c if c.isalnum() else "_" for c in report_id)
            file_path = f"C:\\Program Files\\Splunk\\etc\\apps\\splunk_app_addon-builder\\local\\temp\\{sanitized_report_id}.csv.gz"
            
            with open(file_path, "wb") as file:
                file.write(response.content)
            
            helper.log_info(f"Report downloaded successfully to: {file_path}")
            return file_path
        else:
            raise ValueError(f"Failed to download report: {response.status_code}, {response.text}")
    except Exception as e:
        helper.log_error(f"Error while downloading report: {e}")
        raise ValueError(f"Error while downloading report: {e}")
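Since the hardcoded URL works, one thing worth ruling out is the report_id value itself: an ID pulled from a JSON response can carry stray whitespace (a trailing newline, for instance) or characters that need percent-encoding, which silently produces a malformed URL when interpolated into the f-string. A minimal sketch using the standard library's urllib.parse.quote (the URL and sample ID here are placeholders, not from the original add-on):

```python
from urllib.parse import quote

def build_download_url(report_id):
    """Build the download URL, stripping whitespace and
    percent-encoding characters unsafe in a URL path segment."""
    cleaned = str(report_id).strip()   # drop stray newlines/spaces
    encoded = quote(cleaned, safe="")  # encode e.g. spaces or '/'
    return f"https://example_url/{encoded}/download"

# A report ID returned with a trailing newline would otherwise
# break the URL; after cleaning and encoding it is well-formed:
print(build_download_url("abc 123\n"))  # https://example_url/abc%20123/download
```

If the cleaned URL downloads where the raw one did not, the problem is in the value rather than in how it is passed between functions.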


marnall
Motivator

I can't see any obvious issue with your code. What happens if you include debug log statements that output the report_id value, and then the resulting URL?

Assuming logging mode is set to debug:

    helper.log_debug(f"Report ID is: {report_id}")

    url = f"https://example_url/{report_id}/download"
    
    helper.log_debug(f"URL is: {url}")
    
    headers = {
        "accept": "application/json",
        "Authorization": f"Bearer {jwt_token}",
    }
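Logging the repr of the value (with the !r conversion) rather than the bare value can also expose hidden characters that a plain debug line would render invisibly. A small illustration with a hypothetical report ID carrying a trailing newline:

```python
report_id = "abc123\n"  # hypothetical value with a hidden trailing newline

# !r formats the repr, so invisible characters show up in the log
print(f"Report ID is: {report_id!r}")  # Report ID is: 'abc123\n'

# The same value interpolated into the URL would split it across lines:
print(f"URL is: https://example_url/{report_id}/download")
```

In the add-on itself this would be helper.log_debug(f"Report ID is: {report_id!r}") in place of the plain debug line above.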
