All Posts


@lrod99 The HL7 Add-On for Splunk was last updated to version 1.13 on July 16, 2020, as noted on Splunkbase. This version is compatible with Splunk Cloud, unlike earlier versions (e.g., 1.07–1.12), which were not. As of May 21, 2025, there is no publicly documented plan from the original developers to update the HL7 Add-On for Splunk.

The Splunkbase pages for both the HL7 Add-On and the Conducive App for HL7 do not mention upcoming releases or updates. The most recent activity dates back to 2020 for the HL7 Add-On and 2023 for the Conducive App, suggesting limited ongoing development.

You can reverse engineer the existing add-on using the steps provided, leveraging the files from $SPLUNK_HOME/etc/apps/TA-HL7 and Add-on Builder to recreate it.

https://splunkbase.splunk.com/app/1750
https://splunkbase.splunk.com/app/3283
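If you only need to recover the app as it is currently deployed, a minimal sketch of packaging it for reinstallation elsewhere (assuming a Linux host and default paths; a Splunk .spl package is just a gzipped tarball of the app directory):

# Package the installed add-on directory into a Splunk app archive
cd $SPLUNK_HOME/etc/apps
tar -czf /tmp/TA-HL7.spl TA-HL7

The resulting file can then be installed on another instance via Apps > Manage Apps > Install app from file.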
Hi all, I have custom apps for alert actions and data inputs built in Add-on Builder. I need to rebuild them, but I don't have the export or original source from the Add-on Builder instance used for development. Is there a way to reverse engineer them, or any other way I can get the original source for a rebuild? Thanks in advance for any answer/suggestion.

Splunk Add-on Builder
Yes, we have a lot of content manually created by users on the cluster, and we need to migrate some specific user content while ensuring that user-created objects can still be modified and deleted normally. Sorting out the content that needs to be migrated will take a long time, which is why I didn't want to do this in the first place, but now it is the only way.

1. First, sort out the default pushed configurations and the user-created configurations for the clusters that need to be migrated.
2. Copy the content that needs to be migrated to the deployer of the new cluster, placing the user-created content in the local directory.
3. Perform a full-mode push once (a sketch follows below).
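For reference, a minimal sketch of steps 2 and 3 from the deployer side. This assumes the per-app deployer_push_mode setting in app.conf controls how each app is pushed (the myapp name, sh1 host, and credentials are placeholders; verify the push-mode behaviour against the docs for your version):

# $SPLUNK_HOME/etc/shcluster/apps/myapp/local/app.conf on the deployer
[shclustering]
deployer_push_mode = full

# Push the configuration bundle to the search head cluster
$SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://sh1:8089 -auth admin:changeme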
Are there any plans to update this app?    
Hi @harishsplunk7

Just for anyone catching up: to confirm, you specifically want to script the diag upload. While this is available with --upload on the diag command, it isn't possible to do this non-interactively because of the password request.

I've been doing some more digging on my local instance. The CLI uses Python's getpass to request your password for the support portal/splunk.com interactively, and to my knowledge it isn't possible to pipe/inject into this using anything like stdin. However, I did find the Python calls which actually do the upload. Here is an example Python script which I believe may work for you; I haven't had a chance to test it end to end, only in sections:

import sys, os, glob

# Make Splunk's bundled libraries importable
sys.path.append("/opt/splunk/lib/python3.9/site-packages")
from splunk.clilib import info_gather

# Locate the latest diag file in SPLUNK_HOME
SPLUNK_HOME = os.environ.get("SPLUNK_HOME", "/opt/splunk")
diag_files = sorted(glob.glob(os.path.join(SPLUNK_HOME, "diag-*.tar.gz")))
if not diag_files:
    raise FileNotFoundError("No diag file found")
diag_file = diag_files[-1]

# Minimal stand-in for the options object the CLI normally builds
class CustomOptions:
    def __init__(self, upload_user, upload_password, case_number, upload_description):
        self.upload_user = upload_user
        self.upload_password = upload_password
        self.case_id = case_number
        self.upload_description = upload_description
        self.upload_uri = "https://api.splunk.com"

options = CustomOptions(
    upload_user="your_username",
    upload_password="your_password",
    case_number="1234567",
    upload_description="Automated diag upload",
)

result = info_gather.upload_to_splunkcom(diag_file, options)
print("Upload result:", result)
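If you try this, note that it imports splunk.clilib, so it is probably safest to run it with Splunk's bundled Python (the upload_diag.py filename is just an example):

$SPLUNK_HOME/bin/splunk cmd python3 upload_diag.py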
Just the command splunk diag --upload... and the needed parameters.

Upload: flags to control uploading files, e.g. splunk diag --upload [...]

--case-number=case-number   Case number to attach to, e.g. 200500
--upload-user=UPLOAD_USER   splunk.com username to use for uploading
--upload-description=UPLOAD_DESCRIPTION   Description of the file upload for Splunk support
--firstchunk=chunk-number   For resuming a multi-part upload; select the first chunk to send
--chunksize=chunk-size   Optionally set the chunk size in bytes to be uploaded

These are described at the link above. When you do the upload, it doesn't need to happen on the node where you created the diag file; just move the file onto any Splunk Enterprise node which has HTTPS access to Splunk support over the internet. If needed you can create a script in any language you want but, as I already said, I would probably use Ansible for scripting. It's your decision, based on your environment, needs, and the tools you have.
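Putting those flags together, a typical invocation might look like this (case number, username and description are placeholders, and it will still prompt interactively for your splunk.com password):

splunk diag --upload --case-number=1234567 --upload-user=your_username --upload-description="diag from idx01"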
Hi @harishsplunk7

What is your existing script doing? Perhaps we can help enhance it. Is there a specific reason you need it to be Python? Does your existing script get around the problem that the diag command with the --upload flag requires you to interactively enter your password? I'm not sure how we can get around this issue.

Ultimately this activity could probably be repeated directly using the API that the diag upload CLI uses, however I am not sure if this information is publicly available.
That's true, but at least for me it's really rare to create a diag on a UF and then send it to Splunk. If you need to do it regularly then it's a different story. In that case I would probably do it with, e.g., an Ansible play where I log in to the UF, generate the diag, copy it onto a full Splunk instance, and in the last task send it to Splunk with diag. Those were the steps I did manually the last time I needed to send a diag from a UF to Splunk.
Edit: deleted previous reply. Never mind, I'm sure it originally said UF.
On the UF you should define those two different output groups. Then, in inputs.conf, you just add the following to every input which shouldn't use the default output group:

_TCP_ROUTING = <your additional output group>

https://docs.splunk.com/Documentation/Splunk/latest/Admin/Inputsconf

If you need routing based on the content of events then you must add a HF (heavy forwarder) after the UF, and then you can route it as @livehybrid shows. A sketch of the two-group setup follows below.
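For illustration, a minimal sketch of that setup with two output groups (group names, hosts and the monitor path are placeholders):

# outputs.conf on the UF: defaultGroup receives everything unless overridden
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997

[tcpout:secondary_indexers]
server = idx2.example.com:9997

# inputs.conf: route just this input to the second group
[monitor:///var/log/special.log]
_TCP_ROUTING = secondary_indexers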
Hi @Eric_Rak

Since you're getting timeout issues with curl rather than an SSL error, it sounds like HEC isn't enabled. Can you please confirm whether HEC has been enabled?

Note: by default, HEC (HTTP Event Collector) is disabled, and it uses its own SSL settings in inputs.conf, not server.conf. The [httpServer] stanza in server.conf only affects the management and web interfaces, not HEC.

You can use the following to check - look for disabled = 0/false:

$SPLUNK_HOME/bin/splunk btool inputs list http --debug

Essentially you will need something like the following in inputs.conf:

[http]
disabled = 0
enableSSL = true
serverCert = <full path to your certificate chain pem file>
sslPassword = <password for server key used in chain>

Check out the following resources which might also assist:

Setting up HEC: https://docs.splunk.com/Documentation/Splunk/latest/Data/UsetheHTTPEventCollector
https://community.splunk.com/t5/All-Apps-and-Add-ons/How-do-I-secure-the-event-collector-port-8088-with-an-ssl/m-p/243885
https://docs.splunk.com/Documentation/Splunk/latest/admin/Inputsconf#:~:text=12.%0A*%20Default%3A%2012-,serverCert,-%3D%20%3Cstring%3E%0A*%20See%20the
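Once HEC is enabled, a quick end-to-end test against the event endpoint (the token is a placeholder; -k skips certificate validation and is for testing only):

curl -k https://localhost:8088/services/collector/event -H "Authorization: Splunk <HEC_TOKEN>" -d '{"event": "hello world"}'

A healthy setup responds with {"text":"Success","code":0}.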
Thank you. Can you please let me know the Python script to upload the diag file to Splunk support?
Environment:
Splunk Enterprise 9.x (Windows, On-Prem)
Domain: mydomain.duckdns.org (via DuckDNS)
Certbot for Let's Encrypt certificate generation

Goal:
Use the correct Certbot CLI command to generate certificates for Splunk HEC.
Resolve curl: (28) Connection timed out when testing HTTPS.

Specific Issues:

1. Certbot CLI and Certificate Handling
The Let's Encrypt README warns against copying/moving certificates, but Splunk requires specific paths.
Question: What is the exact Certbot command to generate certificates for Splunk HEC on Windows? Should I copy fullchain.pem and privkey.pem to Splunk's auth/certs directory despite the warnings?

2. HTTPS Curl Failure
After configuring SSL in server.conf, curl times out:

curl -k -v "https://localhost:8088/services/collector" -H "Authorization: Splunk <HEC_TOKEN>"
* Connection timed out after 4518953 milliseconds

Question: Why does curl time out even after enabling SSL in Splunk? Is localhost:8088 valid for testing, or must I use mydomain.duckdns.org:8088?

Steps Taken:
Generated certificates with certbot certonly --standalone -d mydomain.duckdns.org.
Copied fullchain.pem and privkey.pem to $SPLUNK_HOME/etc/auth/certs.
Configured server.conf:

[httpServer]
enableSSL = true
sslCertPath = $SPLUNK_HOME/etc/auth/certs/fullchain.pem
sslKeyPath = $SPLUNK_HOME/etc/auth/certs/privkey.pem

Confirmed port 8088 is open in Windows Firewall.
I think that just a couple of lines of sh script is enough, as diag already has an option to send the file and attach it to your case with Splunk. You can find more at https://docs.splunk.com/Documentation/Splunk/9.4.2/Troubleshooting/Generateadiag
We have multiple Splunk clusters and need to generate a diag file each time for a search head or indexer, so we need to automate the process of generating the diag and uploading it to the Splunk support case. I have a script to generate the file and enter the case number, but Splunk support will need an API or some connection to log in, find the case, and upload the diag.
Hi @vempatisuresh

Applying two set(body, ...) statements sequentially results in only the last one ("extracted") being set as the body, which overwrites your intended transformation.

Try the following:

processors:
  transform/logs:
    log_statements:
      - context: log
        statements:
          - set(body, ParseJSON(body)["message"])

Pipeline inclusion:

service:
  pipelines:
    logs:
      processors: [transform/logs, ...]

For more info check out the docs at https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/transformprocessor/README.md
Hi @harishsplunk7

I wanted to check: are you using Windows or Linux UFs? UFs do not have Python installed as part of the Splunk deployment, so Python might not be the best approach for this.
Hi @krishna821

Most of the REST API endpoints you're likely using on-premises are also available in Cloud. The Splunk Cloud REST API docs are at https://docs.splunk.com/Documentation/SplunkCloud/latest/RESTREF/RESTprolog

You will need to ensure your egress IP is allow-listed on your Splunk Cloud environment, as by default this is restricted. If you are not an admin on the Splunk Cloud platform then you will need to speak to your admin team to set up the allow-listing. For more information check out https://docs.splunk.com/Documentation/SplunkCloud/9.3.2411/Config/ConfigureIPAllowList

Note: I would recommend using token authentication over user/password login. If your Splunk Cloud instance is using SAML/SSO authentication then you will need to use a token.
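As a quick illustration of token authentication against the management API (the stack name and token are placeholders, and your egress IP must already be allow-listed):

curl -H "Authorization: Bearer <your_token>" "https://yourstack.splunkcloud.com:8089/services/server/info?output_mode=json"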
For the "auxiliary" servers (although CM is very important for cluster operations) the sizing hugely depends on a scale. You can have a TB-sized environment which still serves only a few dozens of UF... See more...
For the "auxiliary" servers (although CM is very important for cluster operations) the sizing hugely depends on a scale. You can have a TB-sized environment which still serves only a few dozens of UFs from DS so you can make this DS really small (6CPU would suffice; I've seen such environments) but you could as well have several thousands of UFs pulling from DS. Anyway, with DS you can significantly lower the server's load by increasing the polling period at the cost of increased "latency" of changes to deployed apps. CM also grows with the size of your environment. TB/day scale is still relatively moderate so it shouldn't need 24vCPUs for that. 
Thanks, I'll look into this and confirm the behavior.