All Posts


Thank you for your input. We have indeed used the mentioned add-on and were able to successfully retrieve data from Elasticsearch. However, it's important to note that the queries used are not written in Splunk's native SPL; instead, they rely on Elasticsearch queries. This limits the integration with some of Splunk's core functionalities and does not provide the desired level of efficiency in terms of performance and deep analysis. We are currently looking for best practices and would prefer to adopt a solution that has been widely used over a long period without issues, offering better integration and higher performance with Splunk. If you have any proven experiences or reliable recommendations, we would appreciate you sharing them.
Technically, you might be able to. It might depend on your local limitations, chosen way of installing the software, and so on. Technically, if you bend over backwards, you can even install multiple Splunk instances on one host. That doesn't mean you should. If you do so (I'm still advising against it), each instance will have its own set of inputs and outputs, so if you, for example, point your HF instance to indexers A and your UF instance to indexers B, you will get _all_ events from the HF into indexers A (including _internal) and _all_ events from the UF into indexers B. EDIT: I still don't see how this would solve your problem of sending logs from the "non-indexer" hosts to a remote third-party solution without sending them directly there...
Yes, I agree, it's very confusing, but I think they mean not on the same host, as they will conflict; for a distributed deployment you do install the apps, but in different places.
@PickleRick @livehybrid Can I install the Splunk UF on the SH, CM, and LM? Is it possible, and will it work? Also, will it cause duplicate logs from Splunk as well as from the UF?
I think for distributed systems we have to install all of them: IA, TA, and the app. I think when they say "Do not install Add-Ons and Apps on the same system", they mean not on the same host.
Hi @lrod99
The Conducive App for HL7 isn't available for download directly from Splunkbase because it needs to be obtained directly from Conducive Consulting, which I believe will require a license/support agreement with them. The only other Splunkbase HL7 app (HL7 Add-On for Splunk) was created by Joe Welsh, who previously worked for Splunk but has since left the company; it is therefore unlikely that this will be updated, unless you reach out directly to Joe to see if it could be done. Are you looking for something to act as an endpoint/ingest, or for field extractions for an existing HL7 feed which you have?
Where can I get a list of all outdated OSes for my dashboard? Is there a site or something?
Hi @livehybrid , I tried the below by removing the 2nd line, but nothing is being transmitted to Splunk. As I mentioned, in the OTel collector log the body is getting printed correctly, yet somehow nothing is being sent to the Splunk server. I see nothing in Splunk with the change below.

processors:
  transform/logs:
    log_statements:
      - context: log
        statements:
          - set(body, ParseJSON(body)["message"])
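One thing worth ruling out here: if any incoming body is not valid JSON, ParseJSON can error on it, and depending on the transform processor's error_mode that can drop data. A guarded variant, as a sketch only (it assumes your events arrive as JSON strings containing a "message" key, which I'm inferring from this thread), parses into the cache first and only rewrites the body when the key is actually present:

processors:
  transform/logs:
    error_mode: ignore
    log_statements:
      - context: log
        statements:
          - merge_maps(cache, ParseJSON(body), "upsert") where IsMatch(body, "^\\s*\\{")
          - set(body, cache["message"]) where cache["message"] != nil

If events still don't arrive, adding a debug exporter alongside the Splunk exporter would show whether logs are leaving the pipeline at all, which helps separate a transform problem from an exporter problem.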
Also, testing the API in an API dev tool indicated that we have to append "ApiToken" at the beginning of the key. Hopefully that is the way to enter it in the S1 app as well.
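For reference, a quick way to confirm the token format outside the app (a sketch; the console hostname and endpoint path are placeholders, not taken from this thread) is:

curl -s "https://<your-s1-console>/web/api/v2.1/system/info" -H "Authorization: ApiToken <your_api_token>"

If that returns data, the "ApiToken " prefix is correct for your tenant.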
@lrod99 The HL7 Add-On for Splunk was last updated to version 1.13 on July 16, 2020, as noted on Splunkbase. This version is compatible with Splunk Cloud, unlike earlier versions (e.g., 1.07–1.12), which were not. There is no publicly documented plan from the original developers to update the HL7 Add-On for Splunk as of May 21, 2025. The Splunkbase pages for both the HL7 Add-On and the Conducive App for HL7 do not mention upcoming releases or updates. The most recent activity dates back to 2020 for the HL7 Add-On and 2023 for the Conducive App, suggesting limited ongoing development. You can reverse engineer the existing add-on using the steps provided, leveraging files from $SPLUNK_HOME/etc/apps/TA-HL7 and Add-on Builder to recreate it.
https://splunkbase.splunk.com/app/1750
https://splunkbase.splunk.com/app/3283
Hi all, I have custom apps for an alert action and data inputs built in Add-on Builder. I need to rebuild them, but I don't have the export or original source from the Add-on Builder instance used for development. Is there a way to reverse engineer them, or any other way I can get the original source for the rebuild? Thanks in advance for any answers/suggestions. Splunk Add-on Builder
Yes, we have a lot of things manually created by users on the cluster, and we need to migrate some specific user content and ensure that the things created by users can still be modified and deleted normally. We need a long time to sort out the content that needs to be migrated, which is also the reason why I didn't want to do it at the beginning. But now this is the only way.
1. We will first sort out the default pushed configurations and user-created configurations for the clusters that need to be migrated.
2. Copy the content that needs to be migrated to the deployer of the new cluster, and place the user-created content in the local directory.
3. Perform a full-mode push once.
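For anyone following along, a minimal sketch of steps 2 and 3 (app name, paths, and hostnames are placeholders; this assumes a search head cluster deployer on a Splunk version that supports deployer_push_mode):

# On the new cluster's deployer: copy the app, keeping user-created
# content under local/ so members can still edit/delete it
cp -r /migration/staging/my_app $SPLUNK_HOME/etc/shcluster/apps/my_app

# Request a full push for this app via app.conf in the deployer's copy
cat >> $SPLUNK_HOME/etc/shcluster/apps/my_app/local/app.conf <<'EOF'
[shclustering]
deployer_push_mode = full
EOF

# Push the bundle to the cluster members
$SPLUNK_HOME/bin/splunk apply shcluster-bundle -target https://<sh-member>:8089 -auth admin:<password>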
Are there any plans to update this app?    
Hi @harishsplunk7
Just for anyone catching up, to confirm: you specifically want to script the diag pushing. Whilst this is available with --upload on the diag command, it isn't possible to do this non-interactively because of the password request.
I've been doing some more digging on this on my local instance. The CLI uses Python's getpass to request your password for the support portal/splunk.com interactively, and to my knowledge it's not possible to pipe/inject into this using anything like stdin. However, I did find the Python calls which actually do the upload. Here is an example Python script which I believe may work for you; I've not had a chance to test it entirely yet, only in sections:

import sys, os, glob

# Use the Python libraries bundled with Splunk
sys.path.append("/opt/splunk/lib/python3.9/site-packages")
from splunk.clilib import info_gather

# Locate the latest diag file in SPLUNK_HOME
SPLUNK_HOME = os.environ.get("SPLUNK_HOME", "/opt/splunk")
diag_files = sorted(glob.glob(os.path.join(SPLUNK_HOME, "diag-*.tar.gz")))
if not diag_files:
    raise FileNotFoundError("No diag file found")
diag_file = diag_files[-1]

class CustomOptions:
    def __init__(self, upload_user, upload_password, case_number, upload_description):
        self.upload_user = upload_user
        self.upload_password = upload_password
        self.case_id = case_number
        self.upload_description = upload_description
        self.upload_uri = "https://api.splunk.com"

options = CustomOptions(
    upload_user="your_username",
    upload_password="your_password",
    case_number="1234567",
    upload_description="Automated diag upload",
)

result = info_gather.upload_to_splunkcom(diag_file, options)
print("Upload result:", result)
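If you save the above as, say, upload_diag.py (a hypothetical filename), running it with Splunk's bundled interpreter should avoid library path issues:

$SPLUNK_HOME/bin/splunk cmd python3 upload_diag.py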
Just the command splunk diag --upload... and some needed parameters.

Upload: Flags to control uploading files. Ex: splunk diag --upload [...]

--case-number=case-number                  Case number to attach to, e.g. 200500
--upload-user=UPLOAD_USER                  splunk.com username to use for uploading
--upload-description=UPLOAD_DESCRIPTION    description of file upload for Splunk support
--firstchunk=chunk-number                  For resuming upload of a multi-part upload; select the first chunk to send
--chunksize=chunk-size                     Optionally set the chunk size in bytes to be uploaded

These are described at the link above. When you are doing the upload, it doesn't need to be done on the node where you created the diag file. Just move it to any Splunk Enterprise node which has HTTPS access to Splunk support over the internet. If needed, you can create a script in any language you want to use, but as I already said, I would probably use Ansible for scripting. But it's your decision based on your environment, needs, and the tools which you have.
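Putting those flags together, a typical invocation (the case number and username here are placeholders) might look like:

splunk diag --upload --case-number=200500 --upload-user=jsmith --upload-description="diag from idx01"

Note that it will still prompt interactively for the splunk.com password, which is the limitation discussed elsewhere in this thread.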
Hi @harishsplunk7
What is your existing script doing? Perhaps we can help enhance it. Is there a specific reason you need it to be Python? Does your existing script get around the problem that the diag command with the --upload flag requires you to interactively enter your password? I'm not sure how we can get around this issue. Ultimately this activity could probably be repeated directly using the API that the diag upload CLI uses, however I am not sure if this information is publicly available.
That's true, but at least for me it's really rare to create a diag on a UF and then send it to Splunk. But if you need to do it regularly, then it's a different story. In that case I would probably do it with, e.g., an Ansible play where I log in to the UF, generate the diag, copy that onto a full Splunk instance, and in the last task send it to Splunk with diag. Those were the steps I did manually the last time I needed to send a diag from a UF to Splunk.
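For the record, a rough Ansible sketch of that flow (hostnames, group names, and the diag filename are placeholders, and this is untested; note the final upload step still prompts for the splunk.com password, per the discussion above):

- hosts: uf_host
  tasks:
    # Generate the diag on the UF
    - command: /opt/splunkforwarder/bin/splunk diag
    # Pull the diag back to the control node (real filename varies per run)
    - fetch:
        src: /opt/splunkforwarder/diag-myuf.tar.gz
        dest: /tmp/
        flat: yes

- hosts: full_splunk_host
  tasks:
    # Stage the diag on a full Splunk Enterprise node with internet access
    - copy:
        src: /tmp/diag-myuf.tar.gz
        dest: /opt/splunk/
    # Interactive password prompt still applies here
    - command: /opt/splunk/bin/splunk diag --upload --case-number=200500 --upload-user=jsmith --upload-description="UF diag"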
Edit: deleted previous reply. Never mind, I'm sure it originally said UF.
On the UF you should define those two different output groups. Then, in inputs.conf, add the following to every input which doesn't use the default output group:

_TCP_ROUTING = <your additional output group>

https://docs.splunk.com/Documentation/Splunk/latest/Admin/Inputsconf

If you need routing based on the content of events, then you must add an HF (heavy forwarder) after the UF, and then you can route it as @livehybrid shows.
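To make that concrete, a minimal sketch (group names, hosts, ports, and the monitored path are placeholders):

# outputs.conf on the UF: two target groups
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx-a1:9997, idx-a2:9997

[tcpout:thirdparty_group]
server = thirdparty-host:9997

# inputs.conf: this input goes to the second group instead of the default
[monitor:///var/log/special.log]
_TCP_ROUTING = thirdparty_group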
Hi @Eric_Rak
Since you're getting timeout issues with curl rather than an SSL error, it sounds like HEC isn't enabled. Please can you confirm whether HEC has been enabled? Note: by default, HEC (HTTP Event Collector) is disabled, and it uses its own SSL settings in inputs.conf, not server.conf. The [httpServer] stanza in server.conf only affects the management and web interfaces, not HEC.
You can use the following to check - look for disabled = 0/false:

$SPLUNK_HOME/bin/splunk btool inputs list http --debug

Essentially you will need something like the following in inputs.conf:

[http]
disabled = 0
enableSSL = true
serverCert = <full path to your certificate chain pem file>
sslPassword = <password for server key used in chain>

Check out the following resources which might also assist:
Setting up HEC: https://docs.splunk.com/Documentation/Splunk/latest/Data/UsetheHTTPEventCollector
https://community.splunk.com/t5/All-Apps-and-Add-ons/How-do-I-secure-the-event-collector-port-8088-with-an-ssl/m-p/243885
https://docs.splunk.com/Documentation/Splunk/latest/admin/Inputsconf#:~:text=12.%0A*%20Default%3A%2012-,serverCert,-%3D%20%3Cstring%3E%0A*%20See%20the
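Once HEC is enabled, a quick end-to-end test (host and token are placeholders) is:

curl -k "https://<your-splunk-host>:8088/services/collector/event" -H "Authorization: Splunk <your-hec-token>" -d '{"event": "hello hec"}'

A {"text":"Success","code":0} response should confirm the listener and token are working.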