All Posts


@gcusello but why can't we use an HEC token here? Please help me understand the disadvantages so that I can discuss them with my team.
@splunklearner  Configure Amazon Kinesis Firehose to send data to the Splunk platform - Splunk Documentation https://docs.splunk.com/Documentation/AddOns/released/Firehose/ConfigureHECdistributed 
Hi @splunklearner , it's always better to use the Add-On. And remember to install the add-on both on HF and SHs. Ciao. Giuseppe
@gcusello Can't we receive the logs directly through an HEC token, without installing the add-on?
Hi @splunklearner , if you need to configure an input from AWS to Splunk, you have to install the Splunk Add-on for Amazon Web Services (AWS) ( https://splunkbase.splunk.com/app/1876 ) on the Heavy Forwarder. Ciao. Giuseppe
@splunklearner  Where to place HEC: see "Scale HTTP Event Collector with distributed deployments" in the Splunk documentation. Refer to the links below: https://docs.splunk.com/Documentation/Splunk/latest/Data/UsetheHTTPEventCollector  https://docs.splunk.com/Documentation/AddOns/released/Firehose/ConfigureHECdistributed
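For reference, a minimal sketch of what an HEC input stanza in inputs.conf can look like (the stanza name, token and index below are placeholder values; where the stanza lives - HF, a dedicated HEC tier, or the indexers - should follow the scaling doc above):

    [http]
    disabled = 0
    port = 8088

    [http://aws_firehose_hec]
    disabled = 0
    token = 11111111-2222-3333-4444-555555555555
    index = aws_cloudwatch
    # set sourcetype to whatever the Firehose/AWS add-on expects for your data

The token value is what you hand to the AWS side; note that Kinesis Firehose requires the HEC endpoint to be reachable over HTTPS with a certificate it trusts.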
AWS logs to Splunk We need to onboard AWS CloudWatch logs (from Kinesis) to our Splunk. We have all our Splunk instances in the AWS cloud. Our architecture is a multi-site cluster with 3 sites: 2 indexers in each site, 1 SH in each site, 1 deployment server, 2 CMs, 1 deployer and 1 HF. Everything is configured on the AWS end, and they are asking us to create an HEC endpoint in our Splunk deployment in order to receive the logs. My doubt is where and how I need to configure the HEC token in my clustered environment, and what details I need to provide there. Please help
@Alan_Chan  The Cloud forwarder package/app contains DNS entries for the indexers in outputs.conf. This enables indexers to be replaced or scaled without the customer having to update outputs.conf. Refer to these links: https://community.splunk.com/t5/Getting-Data-In/How-do-I-get-the-IP-addresses-for-my-Indexers-in-Splunk-Cloud/m-p/388836    https://community.splunk.com/t5/Splunk-Search/I-want-IPs-for-Splunk-Cloud-indexers-for-firewall-rules-How-can/m-p/387482
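To illustrate the DNS point, the [tcpout] group delivered by the Splunk Cloud forwarder app has roughly this shape (the stack name below is a placeholder, and the exact number of inputsN entries varies by stack):

    [tcpout:splunkcloud]
    server = inputs1.examplestack.splunkcloud.com:9997, inputs2.examplestack.splunkcloud.com:9997, inputs3.examplestack.splunkcloud.com:9997

Because the endpoints are DNS names rather than fixed addresses, firewall rules are generally safer written against the hostnames than against individual indexer IPs, since the underlying addresses can change.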
I am using a Heavy Forwarder to forward logs to Splunk Cloud. When I installed the splunkclouduf.spl, the logs show: autoloadbalancedconnectionstrategy cooked connection to xxx.xxx.xxx.xxx:9997. May I know whether the IP addresses of the Splunk Cloud indexers and search head are fixed or changeable? And for the firewall rule, should I use IP addresses or DNS names to allow the traffic?
Thanks @livehybrid  Converted your finding into a case to rename the numbers. Oddly enough, when I use 'mc_incidents', I don't get any results. But I do have a working model that's almost there - it's just a bit noisy because it shows all alerts linked to a case. That's an easy fix, though; I can just export the data and do a quick pivot to tidy it up.

    | mcincidents
    | eval CaseNumber=display_id
    | join display_id
        [search index=app_servicenow
        | rex field=description "(?<Priority>(?<=Priority:)\s*[0-9]{1,4}|(?<=P:)\s*[0-9]{1,4})"
        | rex field=description "(?<CaseNumber>ES-\d{5})"
        | eval Priority=trim(Priority)
        | fields display_id CaseNumber Priority
        | where isnum(Priority)]
    | eval Priority=coalesce(Priority, Priority)
    | fieldformat create_time=strftime(create_time, "%c")
    | eval _time=create_time, id=title
    | spath output=collaborators input=collaborators path={}.name
    | sort -create_time
    | eval age=toString(now()-create_time, "duration")
    | eval new_time=strftime(create_time,"%Y-%m-%d %H:%M:%S.%N")
    | eval time=rtrim(new_time,"0")
    | eval status_name=case(
        status == "0", "Unassigned",
        status == "1", "New",
        status == "2", "In Progress",
        status == "3", "Pending",
        status == "4", "Resolved",
        status == "5", "Closed",
        true(), "Unknown")
    | table time, age, status_name, CaseNumber, Priority, name, assignee

Now to battle the constant SVC-limit searches being aborted (the customer is aware of these).
Thanks for your feedback. I think the ksconf app might be the right solution for my use case.
Hi @Hemant_h  The SH should be relatively simple: if you want to send the data from your SH to Cloud, you just need to install the Universal Forwarder app, which you can download from your Splunk Cloud instance, onto the SH. However, if the SH is already sending its internal logs elsewhere (e.g. internal indexers), then this change will likely overwrite that setup; you will need to update your outputs.conf to set the [tcpout]/defaultGroup value to a comma-delimited list of your existing output group and the new Splunk Cloud output group. The same likely applies to your HF - is your HF not currently sending to Splunk Cloud? If it sends elsewhere and you need to maintain this, then you will also need to apply the changes to defaultGroup in addition to installing the forwarder app from your Splunk Cloud environment. For more info check out https://docs.splunk.com/Documentation/Forwarder/9.4.1/Forwarder/Configureforwardingwithoutputs.conf Did this answer help you? If so, please consider: Adding kudos to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
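A minimal sketch of what the merged outputs.conf could look like, assuming a hypothetical existing group called onprem_indexers and a Splunk Cloud group called splunkcloud (use whatever group names actually exist in your environment):

    [tcpout]
    defaultGroup = onprem_indexers, splunkcloud

    [tcpout:onprem_indexers]
    server = idx1.example.local:9997, idx2.example.local:9997

    [tcpout:splunkcloud]
    # this stanza (servers, SSL settings) is delivered by the Splunk Cloud
    # forwarder app; leave its contents as shipped

With both groups listed in defaultGroup, the forwarder clones the data to each destination, which is how dual ingestion is typically achieved.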
Hi @Hemant_h  What is different in terms of the data flow between the two environments? How are your syslog servers sending to both the on-prem and cloud Splunk instances? Are there any TAs installed on either Splunk instance? It could be that the timestamping is incorrect rather than the data being missing. There are lots of variables at play here, so please let us know as much as you can about your environment, configuration and data pipelines. Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
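If it does turn out to be a timestamp problem, a hedged sketch of the kind of props.conf stanza worth checking on the parsing tier (the sourcetype name and time format below are placeholders for whatever your syslog feed actually uses):

    [my:syslog:sourcetype]
    TIME_PREFIX = ^
    TIME_FORMAT = %b %d %H:%M:%S
    MAX_TIMESTAMP_LOOKAHEAD = 32
    TZ = UTC

Events whose timestamps are parsed with the wrong timezone or format can land far outside the time range you are searching, which looks exactly like missing data.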
Hi @MrLR_02  Splunk does not support saving configuration changes directly to the default directory via Splunk Web; all UI changes are always written to the local directory. If you want to pull these back into Git then you have a number of options: API calls to download the knowledge objects and store them on a filesystem (and of course optionally commit to Git). This is my current favourite approach and I am using it with a couple of customers. We use a customised version of https://github.com/paychex/splunk-python/blob/main/Splunk2Git/Splunk2Git.py within a CI/CD pipeline to periodically pull down changes from the remote instance and then merge them into local. There are Splunkbase apps such as Git Version Control for Splunk which might work well in your scenario - allowing you to sync specific knowledge object types into Git. There is another app/Python tool called KSConf which is great at merging local content into default. If you have physical access to your dev environment then you might be able to use this in combination with some scripting to merge content and push it into Git. These are just a few ideas and there are others out there, but in my experience these have worked well in the past. Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
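To make the KSConf option concrete, a hedged sketch of a local-to-default promotion (the app name myapp is hypothetical, and you should check ksconf promote --help for the exact flags on your version):

    # merge UI-made changes from local/ into default/ non-interactively
    ksconf promote --batch $SPLUNK_HOME/etc/apps/myapp/local/savedsearches.conf \
                   $SPLUNK_HOME/etc/apps/myapp/default/savedsearches.conf

    # then commit the updated default/ directory to Git
    cd $SPLUNK_HOME/etc/apps/myapp
    git add default/ && git commit -m "promote local changes to default"

Run it per .conf file (or script it over the whole app), and review the diff before pushing to the prod branch.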
Using the Splunk App for SOAR I am creating events in SOAR using a dashboard in Splunk. I'm facing an issue where the same form submission in the dashboard results in multiple artifacts being created in the one event rather than a new event being created for each submission. Events in Splunk are held for 30 days; this can mean that a time-sensitive request which was requested and run 30 days ago won't generate a new event and run the playbook if it's requested again within those 30 days. I could probably add a unique ID to the form submissions, which would result in a new container being made (as the artifact values wouldn't be identical), but I was wondering if there's an option in the app or in SOAR to always generate a new container? Thanks
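For what it's worth, a minimal sketch of the unique-ID workaround described above, assuming the dashboard search feeds the SOAR alert action and that an extra artifact field (here called submission_id, a made-up name) is acceptable:

    ... your existing dashboard search ...
    | eval submission_id = sha256(tostring(now()) . ":" . tostring(random()))

Because submission_id differs on every run, the artifact values are never identical to a previous submission, so, per your own observation, a new container should be created instead of adding artifacts to the old event.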
Hi Team, what would be the best way to send the logs of the apps and add-ons installed on an on-prem HF and SH to the cloud environment?
We have implemented dual ingestion for the syslog servers. We can see the logs in cloud, but a few log files are missing: on-prem has all the data, while on cloud we are missing some files which have a large count of logs. Please help me understand how we can get the data from those log files into Splunk Cloud. (Screenshots attached: Splunk Cloud logs of the syslog server; syslog server logs on-prem.)
Hello, does Splunk offer the option of saving changes made in an app via Splunk Web directly to the default directory? By default, Splunk saves all changes made via the Splunk Web interface in the local directory. Is there a possibility that the changes are saved directly to the default directory? Some more information about the background of the question: for my Splunk instances, the config management is done using GitLab. All config files in the apps are pushed to the corresponding Splunk instances in the default directory. When I clone an app to my Dev Splunk instance and make changes, these are saved in the corresponding local directory. Before I can push the changes to my Prod Splunk instance via GitLab, I have to manually copy the changes from the local config files to the default config files. This step is quite tedious as soon as it is not just a single config file. Have any of you already had the same problem, and can you give me a tip as to whether this is technically possible in Splunk? Best regards Lukas
Hi @Abass42  Are you able to see from splunkd.log which of the outputs are connecting, and any error messages around connections? Have a look for "TcpOutputProc" and see if there are any events which give us any clues. Regarding the trimming of data - this should be something you can do using Splunk props/transforms - let me know if you want some assistance with this. Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
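As a rough illustration of the props-based trimming, a hedged sketch using SEDCMD (the sourcetype name and the pattern being stripped are placeholders; apply it on the HF or the first full Splunk instance the data passes through):

    # props.conf
    [my:custom:sourcetype]
    SEDCMD-trim_tail = s/ -- DEBUG PAYLOAD.*$//

SEDCMD rewrites the raw event at parse time, so test the regex on sample data first; anything it removes is gone from the indexed copy.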
Hi @krishnaunni  Given that you are limited to RHEL 7.9, I would recommend moving to Splunk 9.2.x (9.2.5), which is supported until Jan 31, 2026. RHEL 7.9 is supported up to Splunk Enterprise 9.2.x; specifically, it is kernel 3.x which is supported up to 9.2.x but marked as deprecated, meaning support may be removed in a future version: "Splunk supports this platform and architecture, but might remove support in a future release". Kernel 3.x is listed as removed in the 9.3.x release notes: https://docs.splunk.com/Documentation/Splunk/9.3.0/ReleaseNotes/Deprecatedfeatures#:~:text=in%20this%20version.-,Removed%20operating%20systems%20in%20version%209.3,-The%20following%20table Regarding your mention of HF/DS - these are actually the same installation package: Splunk Enterprise is the installation, and the configuration applied to it determines whether it is an HF / DS / Search Head (SH) etc., with the exception of the Universal Forwarder (UF), which is a smaller package with fewer features available (such as the Python environment etc.). Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing