All Posts


That's true, but at least for me it's really rare to create a diag on a UF and then send it to Splunk. If you need to do it regularly, though, it's a different story. In that case I would probably do it e.g. with an Ansible play that logs in to the UF, generates the diag, copies it to a full Splunk instance, and in the last task sends it to Splunk with the diag command. Those were the steps I did manually the last time I needed to send a diag from a UF to Splunk.
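For illustration, such a play might look roughly like the sketch below. The hostnames, paths, case number and credentials are placeholders, the real diag filename includes the host name and a timestamp, and the upload options should be verified against the Generateadiag documentation for your Splunk version.

- hosts: uf_hosts
  tasks:
    - name: Generate a diag on the UF
      command: /opt/splunkforwarder/bin/splunk diag

    - name: Fetch the diag archive to the control node (real filename includes host and timestamp)
      fetch:
        src: /opt/splunkforwarder/diag-uf01.tar.gz
        dest: /tmp/
        flat: true

- hosts: full_splunk_instance
  tasks:
    - name: Copy the diag onto the full Splunk instance
      copy:
        src: /tmp/diag-uf01.tar.gz
        dest: /tmp/diag-uf01.tar.gz

    - name: Upload the diag to the existing support case
      command: >
        /opt/splunk/bin/splunk diag --upload-file=/tmp/diag-uf01.tar.gz
        --case-number=1234567
        --upload-user=you@example.com
        --upload-password={{ splunk_com_password }}
        --upload-description="Diag from uf01"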
Edit: deleted previous reply. Never mind, I'm sure it originally said UF.
On the UF you should define those two different output groups. Then, in inputs.conf, just add _TCP_ROUTING = <your additional output group> to every input that should not use the default output group. https://docs.splunk.com/Documentation/Splunk/latest/Admin/Inputsconf If you need routing based on the content of events, then you must add an HF (heavy forwarder) after the UF, and then you can route it as @livehybrid shows.
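For illustration, a minimal outputs.conf on the UF with two target groups could look like this (the group names and server addresses are just placeholders):

== outputs.conf ==
[tcpout]
defaultGroup = onprem_indexers

[tcpout:onprem_indexers]
server = onprem-idx1.example.com:9997, onprem-idx2.example.com:9997

[tcpout:splunkcloud_indexers]
server = inputs1.yourstack.splunkcloud.com:9997

Inputs without _TCP_ROUTING follow defaultGroup; an input stanza that sets _TCP_ROUTING = splunkcloud_indexers goes to the second group instead.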
Hi @Eric_Rak

Since you're getting timeout issues with curl rather than an SSL error, it sounds like HEC isn't enabled. Can you please confirm whether HEC has been enabled?

Note: by default, HEC (HTTP Event Collector) is disabled, and it uses its own SSL settings in inputs.conf, not server.conf. The [httpServer] stanza in server.conf only affects the management and web interfaces, not HEC.

You can use the following to check (look for disabled = 0/false):

$SPLUNK_HOME/bin/splunk btool inputs list http --debug

Essentially you will need something like the following in inputs.conf (a quick curl test to verify it is sketched after this post):

[http]
disabled = 0
enableSSL = true
serverCert = <full path to your certificate chain pem file>
sslPassword = <password for server key used in chain>

Check out the following resources which might also assist:

Setting up HEC: https://docs.splunk.com/Documentation/Splunk/latest/Data/UsetheHTTPEventCollector
https://community.splunk.com/t5/All-Apps-and-Add-ons/How-do-I-secure-the-event-collector-port-8088-with-an-ssl/m-p/243885
https://docs.splunk.com/Documentation/Splunk/latest/admin/Inputsconf#:~:text=12.%0A*%20Default%3A%2012-,serverCert,-%3D%20%3Cstring%3E%0A*%20See%20the

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
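A minimal end-to-end check, assuming HEC is enabled with SSL as above (the token is a placeholder; replace localhost with your DuckDNS hostname when testing remotely, and adjust the quoting if running from a Windows shell):

# Send a test event to HEC; a working setup returns {"text":"Success","code":0}
curl -k "https://localhost:8088/services/collector/event" \
  -H "Authorization: Splunk <HEC_TOKEN>" \
  -d '{"event": "hello from curl", "sourcetype": "manual_test"}'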
Thank you. Can you please let me know the Python script to upload the diag file to Splunk support?
Environment:
Splunk Enterprise 9.x (Windows, On-Prem)
Domain: mydomain.duckdns.org (via DuckDNS)
Certbot for Let's Encrypt certificate generation

Goal:
Use the correct Certbot CLI command to generate certificates for Splunk HEC.
Resolve curl: (28) Connection timed out when testing HTTPS.

Specific Issues:

1. Certbot CLI and Certificate Handling
The Let's Encrypt README warns against copying/moving certificates, but Splunk requires specific paths.
Question: What is the exact Certbot command to generate certificates for Splunk HEC on Windows? Should I copy fullchain.pem and privkey.pem to Splunk's auth/certs directory despite the warnings?

2. HTTPS Curl Failure
After configuring SSL in server.conf, curl times out:

curl -k -v "https://localhost:8088/services/collector" -H "Authorization: Splunk <HEC_TOKEN>"
* Connection timed out after 4518953 milliseconds

Question: Why does curl time out even after enabling SSL in Splunk? Is localhost:8088 valid for testing, or must I use mydomain.duckdns.org:8088?

Steps Taken:
Generated certificates with certbot certonly --standalone -d mydomain.duckdns.org.
Copied fullchain.pem and privkey.pem to $SPLUNK_HOME/etc/auth/certs.
Configured server.conf:

[httpServer]
enableSSL = true
sslCertPath = $SPLUNK_HOME/etc/auth/certs/fullchain.pem
sslKeyPath = $SPLUNK_HOME/etc/auth/certs/privkey.pem

Confirmed port 8088 is open in Windows Firewall.
I think that just a couple of lines of sh script is enough, as diag already has an option to send and attach it to your case with Splunk. You can find more at https://docs.splunk.com/Documentation/Splunk/9.4.2/Troubleshooting/Generateadiag
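A rough sketch of such a script (the case number and credentials are placeholders; check the exact upload options on the Generateadiag page for your version):

#!/bin/sh
# Generate a diag and upload it straight to an existing Splunk support case.
SPLUNK_HOME=/opt/splunk
"$SPLUNK_HOME/bin/splunk" diag --upload \
  --case-number=1234567 \
  --upload-user=you@example.com \
  --upload-password='...' \
  --upload-description="Automated diag upload"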
We have multiple Splunk clusters and need to generate a diag file each time for a search head or indexer, so we need to automate the process of generating the diag and uploading it to a Splunk support case. I have a script to generate the file and enter the case number, but Splunk support will need an API or some connection to log in, find the case, and upload the diag.
Hi @vempatisuresh

Applying two set(body, ...) statements sequentially results in only the last one ("extracted") being set as the body. This eliminates your intended transformation. Try the following:

processors:
  transform/logs:
    log_statements:
      - context: log
        statements:
          - set(body, ParseJSON(body)["message"])

Pipeline inclusion:

service:
  pipelines:
    logs:
      processors: [transform/logs, ...]

For more info check out the docs at https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/transformprocessor/README.md

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
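To illustrate what the statement does, assuming an incoming record whose body is a JSON string (the payload below is made up):

Before: body = {"level":"info","message":"user logged in","ts":"2024-01-01T00:00:00Z"}
After:  body = user logged in

ParseJSON(body) turns the string into a map, and ["message"] picks out just that field as the new body.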
Hi @harishsplunk7

I wanted to check, are you using Windows or Linux UFs? UFs do not have Python installed as part of the Splunk deployment, therefore Python might not be the best approach for this.

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @krishna821

Most of the REST API endpoints you're likely using for on-premise are also available in Cloud. The Splunk Cloud REST API docs are at https://docs.splunk.com/Documentation/SplunkCloud/latest/RESTREF/RESTprolog

You will need to ensure your egress IP is allow-listed on your Splunk Cloud environment, as by default this is restricted. If you are not an admin on the Splunk Cloud platform then you will need to speak to your admin team to set up the allow-listing. For more information check out https://docs.splunk.com/Documentation/SplunkCloud/9.3.2411/Config/ConfigureIPAllowList

Note: I would recommend using token authentication over user/password login (an example call is sketched after this post). If your Splunk Cloud instance is using SAML/SSO authentication then you will need to use a token.

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
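For illustration, a token-based call to the management API could look like this (the stack name and token are placeholders, and your egress IP must already be allow-listed):

# Run a simple search via the Splunk Cloud REST API (management port 8089)
curl -s "https://yourstack.splunkcloud.com:8089/services/search/jobs/export" \
  -H "Authorization: Bearer <your-token>" \
  --data-urlencode search="search index=_internal | head 5" \
  --data-urlencode output_mode=json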
For the "auxiliary" servers (although the CM is very important for cluster operations) the sizing hugely depends on scale. You can have a TB-sized environment which still serves only a few dozen UFs from the DS, so you can make this DS really small (6 CPUs would suffice; I've seen such environments), but you could just as well have several thousand UFs pulling from the DS. Anyway, with the DS you can significantly lower the server's load by increasing the polling period, at the cost of increased "latency" of changes to deployed apps. The CM also grows with the size of your environment. TB/day scale is still relatively moderate, so it shouldn't need 24 vCPUs for that.
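If it helps, the polling period is set on the clients in deploymentclient.conf; something along these lines (the interval and target are just examples) reduces DS load at the cost of slower pickup of app changes:

[deployment-client]
# seconds between phone-home calls to the deployment server
phoneHomeIntervalInSecs = 600

[target-broker:deploymentServer]
targetUri = ds.example.com:8089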
Thanks, I'll look into this and confirm the behavior.
In fact, if it's specific data sources which you want to send to different places then you won't need to touch props/transforms - instead you can set _TCP_ROUTING in your inputs.conf stanzas, setting the value to the output group that you want to use, for example:

== inputs.conf ==
[monitor:///some/path/someFile.log]
index = someIndex
sourcetype = myAppLogs
_TCP_ROUTING = myOnPremOutputGroup

[monitor:///some/path/IIS/logs]
index = iis_logs
sourcetype = iis:logs
_TCP_ROUTING = mySplunkCloudOutputGroup

Also worth reading https://community.splunk.com/t5/Getting-Data-In/Issue-with-default-outputs-when-TCP-ROUTING/m-p/509716

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @Flobzh

Yes, you can achieve this with multiple output groups in your outputs.conf and then props/transforms.conf to filter as required (a small sketch is after this post). For more detailed documentation and examples check out https://docs.splunk.com/Documentation/SplunkCloud/latest/Forwarding/Routeandfilterdatad

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
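As an illustration of the props/transforms route (note that this only takes effect on a heavy forwarder or indexer, not on a UF; the sourcetype, regex and group name are placeholders):

== props.conf ==
[my:sourcetype]
TRANSFORMS-routing = route_errors_to_cloud

== transforms.conf ==
[route_errors_to_cloud]
REGEX = ERROR
DEST_KEY = _TCP_ROUTING
FORMAT = mySplunkCloudOutputGroup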
Luckily, we didn't have to tackle this one. It was way more important to us to move the content created than to remove it later.
Hey,

Thank you for assisting. 1 PB is incredible. I thought 1.7 TB was a lot. I am not too sure about the instance type, I am reaching out to see. As far as SmartStore, we aren't using anything of the sort, I believe. We just have a retention policy to roll over cold/frozen data. I'll look over the docs regarding SmartStore.

In regard to minimum requirements, a lot of our administration servers (Deployment Server, some of the servers handling syslog data, DMZ heavy forwarders, Cluster Managers, etc.) have around 6-8 cores and roughly 6 CPUs. The server pictured below/above is our Azure Cluster Manager. It manages a cluster, and may index itself, not sure, but it only has 8 CPU and 4 cores. Should all servers at least meet the minimum requirements, especially with our ingestion load? I would imagine so. I can work with Support to answer any specific questions as I already have an ODS case open to handle this Splunk version upgrade. This RHEL upgrade is being pushed due to RHEL 7's support expiring.

12 physical CPU cores, or 24 vCPU at 2 GHz or greater speed per core.
12 GB RAM.
A 1 Gb Ethernet NIC, optional second NIC for a management network.
Hello,

Is it possible to have only one Universal Forwarder installed on a Windows server, with this UF sending data to two different Splunk instances?

Ex:
1- Source: IIS logs -> Dest = Splunk Cloud
2- Source: Event Viewer data -> Dest = On-Premise Splunk Enterprise

If yes, can you point to an article that helps set this up? Other possible constraint: we have a deployment server that should allow setting up both flows.

Thanks for your help
Can anyone give me an idea or a Python script to:
Generate a diag file in Splunk using a Python script
Log in to the Splunk support portal and enter the case number
Upload the file automatically
Hi!

@isoutamo looped me in b/c he knows I'm currently in an Azure environment that's doing ~1 PB/day.

First, are you using SmartStore to offload older events to blob storage? If so, around 1 TB/day you're going to want to start thinking about splitting up your cluster because Azure throttles blob upload/download. That WILL cause latency problems. And there's also a whole bunch of SmartStore tuning you'll need to consider to minimize cache thrashing. If you're not using SmartStore then the math goes a completely different way.

Generally, what instance types are you using? We've evaluated the following and find them more than capable at our scale:
dasv5-series
dasv6-series
lsv3-series
ebsv5-series
edsv5-series
edsv6-series

If you ARE using SmartStore, keep in mind that there's no concept of hot/cold, just local disk/remote store, so some of the faster local NVMe may not scale up to what you need for your local cache; going for systems that don't have local NVMe and instead can scale attached disk for your local cache is the way to go. That was our situation, which is why we chose instances that don't have local disk but allow lots of disks to be attached. If you AREN'T using SmartStore, then you'll want to look at the other instance types and leverage the NVMe local disk for hot/warm and the attached disk as cold.

Beyond that, it's just a matter of picking the right size of your instance types to meet your SF/RF needs and data ingest/search load.

SmartStore/blob storage is really the piece that makes Azure unique. Let me know if you are using it and we can discuss how to go about splitting your storage account(s) and possibly splitting your cluster.