All Posts

Hi @splunklearner , HEC is the channel that receives the data, but the inputs and the parsing and normalization rules are in the add-on. In fact, the link you shared describes the add-on configuration process: configuring the token is not sufficient to send data; you also need to configure the add-on to define which inputs to enable. Ciao. Giuseppe
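For reference, a minimal sketch of the HEC side of this on the Heavy Forwarder; the token name, value, index and sourcetype below are placeholders (not values from this thread), and the add-on then supplies the parsing and field extractions on top of it:

# inputs.conf on the HF (sketch only - all values are placeholders)
[http]
disabled = 0
port = 8088

[http://aws_firehose_hec]
disabled = 0
token = 11111111-2222-3333-4444-555555555555
index = aws
sourcetype = aws:firehose:json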
Hi, Splunk Cloud indexer IP addresses can change over time; they are not fixed, as indexers may scale in/out or be replaced during upgrades. Always use the provided DNS hostname (e.g., inputsXX.stackName.splunkcloud.com) for forwarding and firewall rules. For firewall configuration, allow outbound traffic on port 9997 to the Splunk Cloud DNS hostname, not to specific IPs. Splunk Cloud search head IPs are less likely to change, as they are updated less frequently, but there is no guarantee that they will persist. Some useful docs: https://docs.splunk.com/Documentation/SplunkCloud/latest/Service/SplunkCloudservice#Network_connectivity_and_data_transfer
Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
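As a quick illustration of why pinning IPs is fragile: you can resolve the inputs hostname yourself and watch the answers change over time (the hostname below follows the inputsXX.stackName pattern from the docs and is a placeholder, not a real stack):

# Resolve the Splunk Cloud inputs hostname; the addresses behind it can change,
# which is why firewall rules should allow outbound 9997 to the DNS name rather than fixed IPs
nslookup inputs1.stackName.splunkcloud.com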
Hi @splunklearner
You need to send your Amazon Kinesis Firehose data to an Elastic Load Balancer (ELB) with sticky sessions enabled and cookie expiration disabled. Kinesis uses indexer acknowledgement, so it's important that the LB is configured correctly: the sticky session setting is required so that Kinesis reaches the correct HF/indexer to check the acknowledgement. Regarding the endpoint/service behind the ELB - this can be either a HF or your indexer cluster, depending on your configuration. You should also install the Splunk Add-on for Amazon Web Services (AWS), which has the appropriate field extractions etc., *if you are sending AWS data*. If you are sending your own application data then this may not be required; it depends on the processing done within Kinesis.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
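To make the acknowledgement point above more concrete, here is a minimal sketch of the HEC token stanza on each HF/indexer behind the ELB; the token value and index are placeholders:

# inputs.conf on each instance behind the ELB (sketch only - values are placeholders)
[http://firehose_hec]
disabled = 0
token = 11111111-2222-3333-4444-555555555555
index = aws
# enable indexer acknowledgement for this token; this is what the sticky sessions
# on the ELB are needed for, so Kinesis queries the same instance that accepted the events
useACK = 1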
@gcusello But why can't we use a HEC token here? Please help me with the disadvantages so that I can discuss them with my team.
@splunklearner  Configure Amazon Kinesis Firehose to send data to the Splunk platform - Splunk Documentation https://docs.splunk.com/Documentation/AddOns/released/Firehose/ConfigureHECdistributed 
Hi @splunklearner , it's always better to use the add-on. And remember to install the add-on on both the HF and the SHs. Ciao. Giuseppe
@gcusello Can't we receive the logs directly through a HEC token without installing the add-on?
Hi @splunklearner , if you need to configure an input from AWS to Splunk, you have to install the Splunk Add-on for Amazon Web Services (AWS) ( https://splunkbase.splunk.com/app/1876 ) on the Heavy Forwarder. Ciao. Giuseppe
@splunklearner
Where to place HEC: Scale HTTP Event Collector with distributed deployments - Splunk Documentation
Refer to the links below:
https://docs.splunk.com/Documentation/Splunk/latest/Data/UsetheHTTPEventCollector
https://docs.splunk.com/Documentation/AddOns/released/Firehose/ConfigureHECdistributed
AWS logs to Splunk
We need to onboard AWS CloudWatch logs (from Kinesis) to our Splunk. We have all our Splunk instances in AWS cloud. Our architecture is a multi-site cluster with 3 sites: 2 indexers in each site, 1 SH in each site, 1 deployment server, 2 CMs, 1 deployer and 1 HF. Everything is configured on the AWS end, and they are asking us to create an HEC endpoint in our Splunk in order to receive the logs. My doubt is: where and how do I need to configure the HEC token in my clustered environment, and what details do I need to mention there? Please help.
@Alan_Chan
The Cloud forwarder package/app contains DNS entries for the indexers in outputs.conf. This enables indexers to be replaced/scaled without the customer having to update outputs.conf. Refer to these links:
https://community.splunk.com/t5/Getting-Data-In/How-do-I-get-the-IP-addresses-for-my-Indexers-in-Splunk-Cloud/m-p/388836
https://community.splunk.com/t5/Splunk-Search/I-want-IPs-for-Splunk-Cloud-indexers-for-firewall-rules-How-can/m-p/387482
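Roughly what that forwarder app ships, as a sketch only; the real splunkclouduf.spl also bundles the TLS certificates and related ssl settings, and the stack name and stanza name below are placeholders:

# outputs.conf delivered by the Splunk Cloud forwarder app (illustrative sketch)
[tcpout]
defaultGroup = splunkcloud

[tcpout:splunkcloud]
# DNS names, not IPs, so indexers can be replaced or scaled behind the names
server = inputs1.stackName.splunkcloud.com:9997, inputs2.stackName.splunkcloud.com:9997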
I am using a Heavy Forwarder to forward logs to Splunk Cloud. When I installed the splunkclouduf.spl to Splunk, I see: "autoloadbalancedconnectionstrategy cooked connection to xxx.xxx.xxx.xxx:9997". May I know whether the IP addresses of the Splunk Cloud indexers and search heads are fixed or changeable? And for the firewall rule, should I use IP addresses or DNS to allow the traffic?
Thanks @livehybrid
Converted your finding into a case to rename the numbers. Oddly enough, when I use 'mc_incidents', I don't get any results. But I do have a working model that's almost there - it's just a bit noisy because it shows all alerts linked to a case. That's an easy fix, though; I can just export the data and do a quick pivot to tidy it up.

| mcincidents
| eval CaseNumber=display_id
| join display_id
    [search index=app_servicenow
    | rex field=description "(?<Priority>(?<=Priority:)\s*[0-9]{1,4}|(?<=P:)\s*[0-9]{1,4})"
    | rex field=description "(?<CaseNumber>ES-\d{5})"
    | eval Priority=trim(Priority)
    | fields display_id CaseNumber Priority
    | where isnum(Priority)]
| eval Priority=coalesce(Priority, Priority)
| fieldformat create_time=strftime(create_time, "%c")
| eval _time=create_time, id=title
| spath output=collaborators input=collaborators path={}.name
| sort -create_time
| eval age=toString(now()-create_time, "duration")
| eval new_time=strftime(create_time,"%Y-%m-%d %H:%M:%S.%N")
| eval time=rtrim(new_time,"0")
| eval status_name=case(
    status == "0", "Unassigned",
    status == "1", "New",
    status == "2", "In Progress",
    status == "3", "Pending",
    status == "4", "Resolved",
    status == "5", "Closed",
    true(), "Unknown")
| table time, age, status_name, CaseNumber, Priority, name, assignee

Now to battle the constant SVC limit searches being aborted (customer is aware of these).
Thanks for your feedback. I think the ksconf app might be the right solution for my use case.
Hi @Hemant_h
The SH should be relatively simple: if you want to send the data from your SH to Cloud, you will just need to install the Universal Forwarder app, which you can download from your Splunk Cloud instance, onto the SH. However, if the SH is already sending its internal logs elsewhere (e.g. internal indexers) then this change will likely overwrite that setup; you will need to update your outputs.conf to set the [tcpout]/defaultGroup value to a comma-delimited list of your existing output group and the new Splunk Cloud output group. The same likely applies to your HF - is your HF not currently sending to Splunk Cloud? If it sends elsewhere and you need to maintain this, then you will also need to apply the changes to defaultGroup in addition to installing the forwarder app from your Splunk Cloud environment. For more info check out https://docs.splunk.com/Documentation/Forwarder/9.4.1/Forwarder/Configureforwardingwithoutputs.conf
Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
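As a rough illustration of the defaultGroup change described above; the group names and hosts are placeholders, and the Splunk Cloud group is normally created for you by the app downloaded from your stack:

# outputs.conf on the SH/HF (sketch only)
[tcpout]
defaultGroup = onprem_indexers, splunkcloud

[tcpout:onprem_indexers]
server = idx1.example.local:9997, idx2.example.local:9997

[tcpout:splunkcloud]
# this stanza and its TLS settings come from the forwarder app downloaded from your Splunk Cloud instance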
Hi @Hemant_h
What is different in terms of the data flow between the two environments? How are your syslog servers sending to both the on-prem and cloud Splunk instances? Are there any TAs installed on either Splunk instance? It could be that the timestamping is incorrect rather than the data being missing. There are lots of variables at play here; please let us know as much as you can about your environment, configuration and data pipelines.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
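One quick way to test the "incorrect timestamping" theory above is to compare event time with index time on the cloud stack. A sketch only, with a placeholder index and sourcetype for the syslog data; run it over a wide time range in case events were stamped far in the past or future:

index=syslog sourcetype=your_syslog_sourcetype earliest=-7d@d
| eval lag_seconds = _indextime - _time
| stats count min(lag_seconds) max(lag_seconds) avg(lag_seconds) by host, source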
Hi @MrLR_02
Splunk does not support saving configuration changes directly to the default directory via Splunk Web; all UI changes are always written to the local directory. If you want to pull these back into Git, you have a number of options:
- API calls to download the knowledge objects and store them on a filesystem (and of course optionally commit them to Git). This is my current favourite approach and I'm using it with a couple of customers. We are using a customised version of https://github.com/paychex/splunk-python/blob/main/Splunk2Git/Splunk2Git.py within a CI/CD pipeline to periodically pull down changes from the remote instance and then merge them into local.
- There are Splunkbase apps such as Git Version Control for Splunk which might work well in your scenario, allowing you to sync specific knowledge object types into Git.
- There is another app/Python tool called KSConf which is great at merging local content into default. If you have physical access to your dev environment then you might be able to use this in combination with some scripting to merge content and push it into Git.
These are just a few ideas and there are others out there, but in my experience these have worked well in the past.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
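To make the first (REST API) option above a bit more concrete, a minimal sketch of pulling the saved searches for one app so the output can be committed to Git; the host, credentials and app name are placeholders, and real tooling like Splunk2Git covers many more object types:

# Pull saved searches from the "search" app as JSON, then commit the file to Git
curl -sk -u admin:changeme \
  "https://splunk.example.com:8089/servicesNS/-/search/saved/searches?output_mode=json&count=0" \
  -o saved_searches.json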
Using the Splunk App for SOAR, I am creating events in SOAR from a dashboard in Splunk. I'm facing an issue where the same form submission in the dashboard results in multiple artifacts being created in the one event rather than a new event being created for each submission. Events in Splunk are held for 30 days; this can result in a time-sensitive request having been requested and run 30 days ago, for example, but if it's requested again within those 30 days it won't generate a new event and run the playbook. I could probably add a unique ID to the form submissions, which would result in a new container being made (as the artifact values wouldn't be identical), but I was wondering if there's an option in the app or in SOAR to always generate a new container?
Thanks
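For what it's worth, the unique-ID workaround mentioned in the question could be as small as appending one extra field in the search that feeds the SOAR event, so no two submissions produce identical artifact values. A sketch only; the field name is hypothetical:

# appended to the end of the dashboard search that creates the SOAR event
... | eval submission_id = sha256(tostring(now()) . ":" . tostring(random()))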
Hi Team, what would be the best way to send the logs of the apps and add-ons installed on the on-prem HF and SH to the cloud environment?
We have implemented dual ingestion for the syslog servers. We can see the logs in cloud, but a few log files are missing: on-prem has all the data, but in cloud we are missing some files which have a large count of logs. Please help me understand how we can get the data from those log files into Splunk Cloud.
(Attached screenshots: Splunk Cloud logs of the syslog server; syslog server logs on-prem.)