All Posts

Hi @Sidpet  Have you configured the playbook to output the fields you are interested in seeing? Check out https://docs.splunk.com/Documentation/SOAR/current/Playbook/CreatePlaybooks#:~:text=constructing%20your%20playbook.-,Add%20outputs%20to%20your%20playbooks,-You%20can%20add for more info on how to add outputs to your playbooks.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
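As an illustration of one way to make those values visible, a custom code block in the playbook could collect the reputation action's results and log them before wiring them into a playbook output. This is only a sketch: the block name url_reputation_1 and the score datapath are assumptions, not confirmed by this thread.

import phantom.rules as phantom

def log_url_scores(action=None, success=None, container=None, results=None, handle=None):
    # Collect the URL and score from the upstream reputation action's results.
    # The datapaths assume an action block named url_reputation_1.
    collected = phantom.collect2(
        container=container,
        datapath=[
            "url_reputation_1:action_result.parameter.url",
            "url_reputation_1:action_result.data.*.score",
        ],
        action_results=results,
    )
    for url, score in collected:
        phantom.debug(f"URL {url} scored {score}")
    return

Values logged this way appear in the playbook run's debug output, which can help confirm the data exists before declaring it as a playbook output.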
I have a playbook that validates a given URL and assigns scores to it. I can run the playbook successfully but do not see the output. Where do I see it in the CrowdStrike app? I am new here and trying to learn SOAR.
Years later, same question. It seems it is still not possible to configure custom HTTP headers. It is mandatory for us to consume a Threat Intelligence feed where basic auth is not supported. Is there a different way to solve this?
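In case it helps while custom headers are unsupported in the add-on, a minimal sketch of pulling such a feed from a scripted or modular input using Python requests with a custom auth header. The feed URL, header name, and token below are placeholders, not taken from this thread.

import requests

FEED_URL = "https://ti.example.com/api/v1/indicators"  # placeholder feed URL

HEADERS = {
    # Placeholder: whatever header-based auth scheme the feed vendor requires
    "Authorization": "Bearer <your-api-token>",
    "Accept": "application/json",
}

def fetch_indicators():
    # Fetch the feed with the custom header instead of basic auth
    response = requests.get(FEED_URL, headers=HEADERS, timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    for indicator in fetch_indicators():
        print(indicator)

The output could then be indexed via the scripted input or written to a lookup, depending on how the feed needs to be consumed.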
@livehybrid : Sorry, I described it incorrectly. It is a standalone Splunk server, and yes, I am testing locally. When I remove observability_admin_TA_rh_account.py and the restmap.conf file, the app works fine and I can see it under Data Inputs. So I am guessing something is wrong with these two files.  Regards, PNV
Hi, how is this macro set up updated?
Hi @Poojitha  I assume you are using the UCC Framework for this app? Are you able to see the inputs in https://yourSplunkEnvironment.com/en-US/app/yourApp/inputs ? Have you been able to test the app locally? I would highly recommend doing some local verification before packaging the app for upload to Splunk Cloud.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
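A few local checks that are often useful before packaging; this is a sketch assuming the app folder name Splunk_Observability_Metadata and the modular input scheme splunk_observability from the post below, and the input name is a placeholder:

# Confirm the app's conf files parse cleanly
$SPLUNK_HOME/bin/splunk btool check --debug

# Confirm which input stanzas Splunk actually sees for the app
$SPLUNK_HOME/bin/splunk btool inputs list --app=Splunk_Observability_Metadata --debug

# Run the modular input by hand to surface scheme or validation errors
$SPLUNK_HOME/bin/splunk cmd splunkd print-modinput-config splunk_observability splunk_observability://<your_input_name>

Errors from the REST handler itself usually also land in index=_internal, which is worth searching right after the upgrade that makes the input disappear.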
Hi All,

I am trying to create a modular input in Splunk Cloud that gets Splunk Observability metadata. The input has the fields realm, token and object_type. I can see the input form under "Settings -> Data Inputs" with all these fields, and that works fine.

I have created another script that masks the token. But when I upgrade the app with this script and restmap.conf, the modular input disappears from "Settings -> Data Inputs".

splunk_observability.py --> creates the modular input schema.
observability_object_helper.py --> helper script that makes the API call.
observability_admin_TA_rh_account.py --> creates the REST model and encrypts the token input field.

Directory structure:
App folder name --> Splunk_Observability_Metadata
metadata --> default.meta
bin --> import_declare_test.py, splunk_observability.py, observability_object_helper.py, observability_admin_TA_rh_account.py
README --> inputs.conf.spec
local --> inputs.conf, app.conf
lib --> required splunklib

splunk_observability.py

import import_declare_test
import sys, os
import json
from splunklib import modularinput as smi

sys.path.insert(0, os.path.dirname(__file__))
from observability_object_helper import stream_events, validate_input


class SPLUNK_OBSERVABILITY(smi.Script):
    def __init__(self):
        super(SPLUNK_OBSERVABILITY, self).__init__()

    def get_scheme(self):
        scheme = smi.Scheme("Splunk Observability Metadata Input")
        scheme.use_external_validation = False
        scheme.use_single_instance = False
        scheme.description = "Modular Input"
        scheme.streaming_mode_xml = True
        scheme.add_argument(
            smi.Argument(
                'realm',
                required_on_create=True
            )
        )
        scheme.add_argument(
            smi.Argument(
                'name',
                required_on_create=True,
                description="Name should not contain whitespaces",
            )
        )
        scheme.add_argument(
            smi.Argument(
                'token',
                required_on_create=True,
                description="Add API Key required to connect to Splunk Observability Cloud",
            )
        )
        scheme.add_argument(
            smi.Argument(
                'object_type',
                required_on_create=True
            )
        )
        return scheme

    def validate_input(self, definition: smi.ValidationDefinition):
        return validate_input(definition)

    def stream_events(self, inputs: smi.InputDefinition, ew: smi.EventWriter):
        return stream_events(inputs, ew)


if __name__ == "__main__":
    sys.exit(SPLUNK_OBSERVABILITY().run(sys.argv))

observability_object_helper.py

import json
import logging
import time
import requests
# import import_declare_test
from solnlib import conf_manager, log, credentials
from splunklib import modularinput as smi

ADDON_NAME = "splunk_observability"


def get_key_name(input_name: str) -> str:
    # `input_name` is a string like "example://<input_name>".
    return input_name.split("/")[-1]


def logger_for_input(input_name: str) -> logging.Logger:
    return log.Logs().get_logger(f"{ADDON_NAME.lower()}_{input_name}")


def splunk_observability_get_endpoint(type, realm):
    BASE_URL = f"https://api.{realm}.signalfx.com"
    ENDPOINT = ""
    types = {
        "chart": f"{BASE_URL}/v2/chart",
        "dashboard": f"{BASE_URL}/v2/dashboard",
        "detector": f"{BASE_URL}/v2/detector",
        "heartbeat": f"{BASE_URL}/v2/detector",
        "synthetic": f"{BASE_URL}/v2/synthetics/tests",
    }
    for type_key in types:
        if type.lower() == type_key.lower():
            ENDPOINT = types.get(type_key)
    return ENDPOINT


def splunk_observability_get_sourcetype(type):
    sourcetypes = {
        "chart": "observability:chart_api:json",
        "dashboard": "observability:dashboard_api:json",
        "detector": "observability:detector_api:json",
        "synthetic": "observability:synthetic_api:json",
        "token": "observability:token_api:json",
    }
    for type_key in sourcetypes:
        if type.lower() == type_key.lower():
            return sourcetypes.get(type_key)


def splunk_observability_get_objects(type, realm, token, logger):
    TOKEN = token
    ENDPOINT_URL = splunk_observability_get_endpoint(type, realm)
    limit = 200
    offset = 0
    pagenation = True
    headers = {"Content-Type": "application/json", "X-SF-TOKEN": TOKEN}
    processStart = time.time()
    objects = []
    while pagenation:
        params = {"limit": limit, "offset": offset}
        try:
            response = requests.get(ENDPOINT_URL, headers=headers, params=params)
            response.raise_for_status()
        except requests.exceptions.RequestException as e:
            log.log_exception(logger, e, "RequestError", msg_before="Error fetching data:")
            return []
        data = response.json()
        if isinstance(data, list):
            results = data
        elif isinstance(data, dict):
            results = data.get("results", [])
        else:
            logger.error("Unexpected response format")
        objects.extend(results)
        logger.info(f"pagenating {type} result 'length': {len(results)} , offset: {offset}, limit {limit}")
        if len(results) < limit:
            pagenation = False
        # too many objects to query, splunk will max out at 10,000
        elif (offset >= 10000 - limit):
            pagenation = False
            logger.warn("Cannot ingest more than 10,000 objects")
        else:
            offset += limit
    count = offset + len(results)
    timeTakenProcess = str(round((time.time() - processStart) * 1000))
    log.log_event(logger, {"message": f"{type} ingest finished", "time_taken": f"{timeTakenProcess}ms", "ingested": count})
    return objects


def splunk_observability_get_objects_synthetics(type, realm, token, logger):
    processStart = time.time()
    BASE_URL = f"https://api.{realm}.signalfx.com"
    ENDPOINT_URL = f"{BASE_URL}/v2/synthetics/tests"
    page = 1
    pagenating = True
    headers = {"Content-Type": "application/json", "X-SF-TOKEN": token}
    synthetics_objects = []
    while pagenating:
        params = {"perPage": 100, "page": page}
        try:
            response = requests.get(ENDPOINT_URL, headers=headers, params=params)
            response.raise_for_status()
        except requests.exceptions.RequestException as e:
            log.log_exception(logger, e, "RequestError", msg_before="Error fetching synthetic data:")
            return []
        data = response.json()
        tests = data["tests"]
        for test in tests:
            synthetic = {"id": test["id"], "type": test["type"]}
            SYNTHETIC_TYPE = synthetic["type"]
            SYNTHETIC_ID = synthetic["id"]
            detail_url = f"{BASE_URL}/v2/synthetics/tests/{SYNTHETIC_TYPE}/{SYNTHETIC_ID}"
            if type == "synthetic_detailed":
                try:
                    detail_response = requests.get(detail_url, headers=headers)
                    detail_response.raise_for_status()
                    synthetics_objects.append(detail_response.json())
                except requests.exceptions.RequestException as e:
                    log.log_exception(logger, e, "RequestError", msg_before=f"Error fetching synthetic details for ID: {SYNTHETIC_ID}")
            else:
                synthetics_objects.append(test)
        pagenating = data.get("nextPageLink") is not None
        page += 1
    timeTakenProcess = str(round((time.time() - processStart) * 1000))
    log.log_event(logger, {"message": "synthetic ingest finished", "time_taken": f"{timeTakenProcess}ms", "ingested": len(synthetics_objects)})
    return synthetics_objects


def validate_input(definition: smi.ValidationDefinition):
    return False


def stream_events(inputs: smi.InputDefinition, event_writer: smi.EventWriter):
    for input_name, input_item in inputs.inputs.items():
        normalized_input_name = input_name.split("/")[-1]
        logger = logger_for_input(normalized_input_name)
        try:
            observability_type = input_item.get("object_type")
            observability_token = input_item.get("token")
            observability_realm = input_item.get("realm")
            log.modular_input_start(logger, normalized_input_name)
            if observability_type.lower() == "synthetic":
                objects = splunk_observability_get_objects_synthetics(observability_type, observability_realm, observability_token, logger)
            else:
                objects = splunk_observability_get_objects(observability_type, observability_realm, observability_token, logger)
            # source_type = splunk_observability_get_sourcetype(observability_type)
            for obj in objects:
                logger.debug(f"DEBUG EVENT {observability_type} :{json.dumps(obj)}")
                event_writer.write_event(
                    smi.Event(
                        data=json.dumps(obj, ensure_ascii=False, default=str),
                        index=input_item.get("index"),
                        sourcetype=input_item.get("sourcetype"),
                    )
                )
            log.events_ingested(logger, input_name, sourcetype, len(objects), input_item.get("index"))
            log.modular_input_end(logger, normalized_input_name)
        except Exception as e:
            log.log_exception(logger, e, "IngestionError", msg_before="Error processing input:")

observability_admin_TA_rh_account.py

from splunktaucclib.rest_handler.endpoint import (
    field,
    validator,
    RestModel,
    DataInputModel,
)
from splunktaucclib.rest_handler import admin_external, util
import logging

util.remove_http_proxy_env_vars()

fields = [
    field.RestField('name', required=True, encrypted=False),
    field.RestField('realm', required=True, encrypted=False),
    field.RestField('token', required=True, encrypted=True),
    field.RestField('interval', required=True, encrypted=False, default="300"),
]
model = RestModel(fields, name='splunk_observability')
endpoint = DataInputModel(model, input_type='splunk_observability')

if __name__ == '__main__':
    logging.getLogger().addHandler(logging.NullHandler())
    admin_external.handle(endpoint)

restmap.conf

[endpoint:admin/input/splunk_observability]
match = splunk_observability
python.version = python3
handlerfile = observability_admin_TA_rh_account.py

Please help me to resolve this issue.

Thanks,
PNV
Are these changes still relevant as of 2025?  
Hi @hikan  You are right in that the download for Splunk Enterprise Security on Splunkbase is restricted; only users who have been assigned an ES license are added to the entitlement for downloading this app. Please reach out to your Splunk account team, who should be able to arrange the appropriate license and access for you to conduct a proof-of-concept of Enterprise Security. If you do not have, or do not know, your account team then I would put a request in via https://www.splunk.com/en_us/talk-to-sales.html In the meantime it might be worth looking at the Security Essentials and the ES Content Update Splunkbase apps or the http://research.splunk.com/ website.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Hi, We are using Splunk Enterprise on-premises. I have now launched another instance with a trial license and I would like to test the Security features. However, the app download is unfortunately restricted. Can I get download permission for the test Splunk instance? Thanks, Regards, hikan
Hi @_KD  I don't have specific details on the future product roadmap or development timelines for the Splunk Universal Forwarder Docker image regarding OpenShift compatibility or sidecar usage. The recommended approach for collecting data from Kubernetes and OpenShift environments is the official Splunk OpenTelemetry Collector project, which is designed to integrate with these platforms and their security models. If the specific use case of running the Universal Forwarder Docker image as a sidecar is critical for your needs, we encourage you to provide this feedback through your Splunk account team or official support channels such as https://www.splunk.com/support

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Hi @talente  How long has it been since you requested the lab? Sometimes these take 10-15 minutes or more to start up. Is the lab URL on a specific port (e.g. 8000)? If so, can you access that port for other sites from your network? For example, try http://portquiz.net:8000/ Which lab is it you are working on?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Hi, I tried to set up the Splunk lab but the URL for the instance is not working.
Hi @dompico  I assume this is installed on a heavy forwarder within your environment? Please can you confirm how you've installed the app? It looks like the app is looking for authhosts.conf, which it cannot find. The app doesn't ship with this file, so I presume it's generated as part of the modular input when it runs. Are there any other errors before this one relating to the retrieval of content from S1 that might be used to populate this conf file? There's a similar thread at https://community.splunk.com/t5/All-Apps-and-Add-ons/sentinelone-app-no-longer-able-to-connect-to-sentinelone/m-p/692354

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
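To look for earlier errors from the add-on's inputs, a hedged starting point would be a search like the one below. The source wildcard is an assumption about the add-on's log file naming; adjust it to match the actual log files under $SPLUNK_HOME/var/log/splunk.

index=_internal source=*sentinelone* (ERROR OR WARN)
| sort 0 _time

Anything logged shortly before the 404 on conf-authhosts may show why the content used to build that conf file was never retrieved.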
On the bug fix for this issue, Splunk Support have come back with the following ...

Observation & Findings: Thanks for flagging this issue with us; we have taken this to the development team. Our development team is having high-level discussions on whether to deprecate or enhance the xpath command. Once the xpath enhancement or deprecation is done, it will be updated in the official documentation. As this task will go through some pre-checks, post-checks and approvals, it might take some time. So workarounds are the only option for now.

Here's a more generic regex to strip out different sorts of XML declarations (note, it removes CDATA entries too):

| ...
``` example: https://regex101.com/r/BqHeX4/3 ```
| eval xml=replace(_raw, "(?s)(\<[\?\!]([^\\>]+\>).+?)*(?=\<[^(?=\/)])(?=[a-zA-Z])*", "")
| rex mode=sed field=_raw "s/(?s)(\<[\?\!]([^\\>]+\>).+?)*(?=\<[^(?=\/)])(?=[a-zA-Z])*//g"
``` sed example for a props.conf SEDCMD to remove XML declarations before indexing ```
| xpath ...

Finally, there is another bug (Splunk said they are aware) with the xpath command when it is used more than once. Any existing multi-value fields become non-multi-value fields (as if a nomv command had been applied), so any mv manipulations should be done before subsequent xpath commands.
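For reference, a sketch of how that sed expression might be applied at index time via props.conf on the parsing tier. The sourcetype name my_xml_sourcetype is a placeholder; the expression is copied from the search above and should be tested against sample data first.

[my_xml_sourcetype]
SEDCMD-strip_xml_declarations = s/(?s)(\<[\?\!]([^\\>]+\>).+?)*(?=\<[^(?=\/)])(?=[a-zA-Z])*//g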
Hello, I'm trying to get SentinelOne data into my cloud instance but I'm getting errors similar to this related to the inputs. At first I was having an issue with authentication errors using the API. I believe that's resolved after regenerating the key, because these are the only logs I can see in the index I created for S1.

error_message="[HTTP 404] https://127.0.0.1:8089/servicesNS/nobody/sentinelone_app_for_splunk/configs/conf-authhosts/********?output_mode=json" error_type="<class 'splunk.ResourceNotFound'>" error_arguments="[HTTP 404] https://127.0.0.1:8089/servicesNS/nobody/sentinelone_app_for_splunk/configs/conf-authhosts/***********?output_mode=json" error_filename="s1_client.py" error_line_number="162" input_guid="*****************" input_name="Threats"
The Splunk universal forwarder image is currently not compatible with OpenShift. The image architecture requires using sudo to switch between users and running as specific UIDs, which is not compatible with OpenShift's UID constraints. Are you planning to ever fix your image to make it compatible with OpenShift so it can run as a sidecar container?
Thank you for the quick and helpful reply. I figured that was probably the answer. In the meantime I'm working with the data owner at the origin to see if they can mitigate the issue on their end. Clearly something isn't right on the Azure client side and that'll need to be fixed. 
Hi @gazoscreek  No, the blacklist parameter in inputs.conf is not applicable for filtering event content collected by the Splunk_TA_microsoft-cloudservices add-on. The blacklist parameter is used for file-based inputs (monitor, batch) to exclude files or directories based on their path. The Splunk_TA_microsoft-cloudservices add-on collects data via APIs, not from files. I believe you're stuck with the index-time parsing option which you are already looking at. Would you be able to share your config for this? We may be able to find some performance improvements which might help. Also, what is your architecture like? If there is too much pressure on your HF to do this parsing, are there other intermediary forwarders that you could do it on, or perhaps even the indexers? This falls into the "it depends" category a little as I don't have all the info, but there may be some options out there. Regarding running ingest_eval on another instance after the data has already been parsed on your HF, you can use RULESET- props.conf settings to call transforms - this is what Ingest Actions does to achieve transforms on already-parsed data.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
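For illustration, a minimal sketch of that RULESET- pattern on a downstream indexer or forwarder, assuming a placeholder sourcetype mscs:nsg:flow and that events matching the NSG pattern should simply be dropped. Both the sourcetype and the simplified REGEX are assumptions for the example, not taken from this thread.

props.conf
[mscs:nsg:flow]
RULESET-drop_nsg_noise = drop_nsg_noise

transforms.conf
[drop_nsg_noise]
REGEX = NETWORKSECURITYGROUPS/NSGBLAHBLAH
DEST_KEY = queue
FORMAT = nullQueue

Because RULESET- is applied to data that has already been parsed upstream, this runs against the cooked events arriving from the HF, unlike TRANSFORMS- which only applies during the initial parsing phase.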
Wondering if there's a blacklist parameter I can add to one of my Azure inputs so that Splunk will ignore pulling the event across the WAN. I already have a working ingest action, but the amount of data that's coming across is causing memory issues on my forwarder ... my working ingest action is this ... NETWORKSECURITYGROUPS\\/NSGBLAHBLAH.*?(IP\\.IP\\.IP\\.IP|IP\\.IP\\.IP\\.IP) ... but is there an inputs.conf parameter I can set with this regex so that the data is ignored at the source?