All Posts

Thanks, @livehybrid, this is very close to what I need. What I ultimately want, though, is to make these automatic lookups. We actually have about ten different ones that we need to apply to this particular sourcetype. I just can't seem to figure out how to add something like msg.message_set{}.type to an automatic lookup and have it work.
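For what it's worth, a minimal sketch of how an automatic lookup against that field could be wired up in props.conf and transforms.conf on the search head. It assumes a hypothetical lookup file and definition (message_types.csv / message_type_lookup) and aliases the braced JSON field to a simpler name first, since field names containing {} can be awkward in LOOKUP- stanzas; all names here are illustrative:

props.conf:
[your:sourcetype]
# Alias the multivalue JSON field to a plain field name (assumption: quoted field names work on your version)
FIELDALIAS-msg_type = "msg.message_set{}.type" AS message_type
# Automatic lookup keyed on the aliased field; "type" is the matching column in the lookup
LOOKUP-message_type = message_type_lookup type AS message_type OUTPUTNEW description AS message_description

transforms.conf:
[message_type_lookup]
filename = message_types.csv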
@splunklearner If you want a pull model, there is https://splunkbase.splunk.com/app/1876. For a push model, I believe HEC is the recommended approach.

Best practices for Splunk HTTP Event Collector:
- Always configure HEC to use HTTPS to ensure data confidentiality during transmission. Enable SSL/TLS encryption and leverage certificate-based authentication to authenticate the sender and receiver.
- Consider the expected data volume and plan your HEC deployment accordingly. Distribute the load by deploying multiple HEC instances and using load balancers to ensure high availability and optimal performance.
- Implement proper input validation and filtering mechanisms to prevent unauthorized or malicious data from entering your Splunk environment. Use whitelists, blacklists, and regex patterns to define data validation rules.
- Regularly monitor the HEC pipeline to ensure data ingestion is successful. Implement proper error handling mechanisms and configure alerts to notify administrators in case of failures or issues.

Some common challenges associated with Splunk HEC:
- Scalability and performance: while HEC is designed to handle high volumes of data, organizations with extremely large-scale deployments may face challenges. Carefully plan the HEC deployment, consider load balancing mechanisms, and optimize configurations to ensure optimal performance.
- Network availability: as HEC relies on network connectivity for data ingestion, any issues with network availability or reliability can impact the ingestion process. Have robust network infrastructure and redundancy measures in place to minimize downtime and ensure uninterrupted data flow.
- Security configuration: while HEC provides authentication mechanisms and supports SSL/TLS encryption, configuring and managing authentication and security settings can be complex. Properly configure user access controls, certificates, and encryption protocols to ensure secure data transmission and prevent unauthorized access.
- Data quality: HEC allows data ingestion from various sources, making it crucial to implement proper input validation and filtering mechanisms. Ensuring the integrity and quality of the ingested data requires defining validation rules, whitelists, blacklists, and regular expressions to filter out unwanted or malicious data.
- Monitoring and troubleshooting: establish proper monitoring processes to track the health and performance of HEC instances, implement logging and alerting mechanisms, and have troubleshooting strategies in place to quickly identify and resolve any problems.
- Integration: integrating HEC with different data sources, applications, and systems can pose compatibility challenges. Ensure that the data sources are compatible with HEC and have the necessary configurations in place for seamless integration.
- Maintenance: configuring and maintaining HEC instances and associated settings requires technical expertise and ongoing effort. Keep HEC configurations up to date, apply patches and updates, and regularly review and optimize settings to ensure optimal performance and security.
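To illustrate the push model, a minimal sketch of sending a single event to HEC over HTTPS with curl; the hostname, port, token, index and sourcetype below are placeholders:

curl -k "https://splunk.example.com:8088/services/collector/event" \
  -H "Authorization: Splunk 12345678-1234-1234-1234-123456789012" \
  -d '{"event": "hello from HEC", "sourcetype": "my:app", "index": "main"}'

(-k skips certificate verification and is only for quick testing; in production, validate the server certificate per the best practices above.)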
Hi @splunklearner, the HEC is the channel to receive data, but the inputs and the parsing and normalization rules are in the add-on. In fact, the link you shared is a description of the add-on configuration process: configuring the token isn't sufficient to send data, you also need to configure the add-on to define which inputs to enable. Ciao. Giuseppe
Hi, Splunk Cloud indexer IP addresses can change over time; they are not fixed, as indexers may scale in/out or be replaced during upgrades. Always use the provided DNS hostname (e.g., inputsXX.stackName.splunkcloud.com) for forwarding and firewall rules. For firewall configuration, allow outbound traffic on port 9997 to the Splunk Cloud DNS hostname, not specific IPs. Splunk Cloud search head IPs are less likely to change as they are updated less frequently, but there is no guarantee that they will persist. Some useful docs: https://docs.splunk.com/Documentation/SplunkCloud/latest/Service/SplunkCloudservice#Network_connectivity_and_data_transfer Did this answer help you? If so, please consider: Adding kudos to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
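As a rough sketch, an outputs.conf stanza pointing at the DNS hostnames rather than IPs could look like the following; the stack name and group name are placeholders, and in practice the server list and TLS settings are delivered by the Splunk Cloud forwarder credentials package (splunkclouduf.spl):

[tcpout]
defaultGroup = splunkcloud

[tcpout:splunkcloud]
# DNS names resolve to whichever indexers are currently in service
server = inputs1.stackName.splunkcloud.com:9997, inputs2.stackName.splunkcloud.com:9997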
Hi @splunklearner You need to send your Amazon Kinesis Firehose data to an Elastic Load Balancer (ELB) with sticky sessions enabled and cookie expiration disabled. Kinesis uses indexer acknowledgement, so it's important that the LB is configured correctly: the sticky session setting is required so that Kinesis reaches the correct HF/indexer to check the acknowledgement. Regarding the endpoint/service behind the ELB - this can be either a HF or your indexer cluster, depending on your configuration. You should also install the Splunk Add-on for Amazon Web Services (AWS), which has the appropriate field extractions etc., *if you are sending AWS data*. If you are sending your own application data then this may not be required; it depends on the processing done within Kinesis. Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
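As a rough sketch of the receiving side (token value, index and sourcetype are placeholders, and the index must already exist), the HEC token that Firehose sends to would typically have indexer acknowledgement enabled in inputs.conf on the HF or indexers behind the ELB:

[http]
disabled = 0
enableSSL = 1

[http://aws_firehose]
token = 11111111-2222-3333-4444-555555555555
# Firehose checks acknowledgements, so the token needs useACK enabled
useACK = 1
index = aws
sourcetype = aws:firehose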
@gcusello but why can't we use an HEC token here? Please help me understand the disadvantages so that I can discuss them with my team.
@splunklearner  Configure Amazon Kinesis Firehose to send data to the Splunk platform - Splunk Documentation https://docs.splunk.com/Documentation/AddOns/released/Firehose/ConfigureHECdistributed 
Hi @splunklearner, it's always better to use the add-on. And remember to install the add-on both on the HF and the SHs. Ciao. Giuseppe
@gcusello Can't we receive the logs directly through an HEC token without installing the add-on?
Hi @splunklearner, if you need to configure an input from AWS to Splunk, you have to install the Splunk Add-on for Amazon Web Services (AWS) (https://splunkbase.splunk.com/app/1876) on the Heavy Forwarder. Ciao. Giuseppe
@splunklearner Where to place HEC: Scale HTTP Event Collector with distributed deployments - Splunk Documentation. Refer to the links below: https://docs.splunk.com/Documentation/Splunk/latest/Data/UsetheHTTPEventCollector https://docs.splunk.com/Documentation/AddOns/released/Firehose/ConfigureHECdistributed
AWS logs to Splunk We need to onboard AWS CloudWatch logs (from Kinesis) to our Splunk. We have all our Splunk instances in the AWS cloud. Our architecture is a multi-site cluster with 3 sites: 2 indexers in each site, 1 SH in each site, 1 deployment server, 2 CMs, 1 deployer and 1 HF. Everything is configured from the AWS end and they are asking us to create an HEC endpoint in our Splunk in order to receive logs. My doubt is: where and how do I need to configure the HEC token in my clustered environment? What details do I need to provide there? Please help
@Alan_Chan The Cloud forwarder package/app contains DNS entries for the indexers in outputs.conf. This enables indexers to be replaced/scaled without the customer having to update outputs.conf. Refer to these links: https://community.splunk.com/t5/Getting-Data-In/How-do-I-get-the-IP-addresses-for-my-Indexers-in-Splunk-Cloud/m-p/388836 https://community.splunk.com/t5/Splunk-Search/I-want-IPs-for-Splunk-Cloud-indexers-for-firewall-rules-How-can/m-p/387482
I am using a Heavy Forwarder to forward logs to Splunk Cloud. When I installed the splunkclouduf.spl to Splunk, I see: autoloadbalancedconnectionstrategy cooked connection to xxx.xxx.xxx.xxx:9997 May I know whether the IP addresses of the Splunk Cloud indexers and search heads are fixed or changeable? And for the firewall rule, should I use IP addresses or DNS to allow the traffic?
Thanks @livehybrid Converted your finding into a case to rename the numbers. Oddly enough, when I use 'mc_incidents', I don't get any results. But I do have a working model that's almost there - it's just a bit noisy because it shows all alerts linked to a case. That's an easy fix, though; I can just export the data and do a quick pivot to tidy it up.

| mcincidents
| eval CaseNumber=display_id
| join display_id
    [search index=app_servicenow
    | rex field=description "(?<Priority>(?<=Priority:)\s*[0-9]{1,4}|(?<=P:)\s*[0-9]{1,4})"
    | rex field=description "(?<CaseNumber>ES-\d{5})"
    | eval Priority=trim(Priority)
    | fields display_id CaseNumber Priority
    | where isnum(Priority)]
| eval Priority=coalesce(Priority, Priority)
| fieldformat create_time=strftime(create_time, "%c")
| eval _time=create_time, id=title
| spath output=collaborators input=collaborators path={}.name
| sort -create_time
| eval age=toString(now()-create_time, "duration")
| eval new_time=strftime(create_time,"%Y-%m-%d %H:%M:%S.%N")
| eval time=rtrim(new_time,"0")
| eval status_name=case(
    status == "0", "Unassigned",
    status == "1", "New",
    status == "2", "In Progress",
    status == "3", "Pending",
    status == "4", "Resolved",
    status == "5", "Closed",
    true(), "Unknown"
  )
| table time, age, status_name, CaseNumber, Priority, name, assignee

Now on to battling the constant SVC limit searches being aborted (the customer is aware of these).
Thanks for your feedback. I think the ksconf app might be the right solution for my use case.
Hi @Hemant_h The SH should be relatively simple: if you want to send the data from your SH to Cloud then you will just need to install the Universal Forwarder app, which you can download from your Splunk Cloud instance, onto the SH. However, if the SH is already sending its internal logs elsewhere (e.g. internal indexers) then this change will likely overwrite that setup; you will need to update your outputs.conf to set the [tcpout]/defaultGroup value to a comma-delimited list of your existing output group and the new Splunk Cloud output group. The same likely applies to your HF - is your HF not currently sending to Splunk Cloud? If it sends elsewhere and you need to maintain this then you will also need to apply the changes to defaultGroup in addition to installing the forwarder app from your Splunk Cloud environment. For more info check out https://docs.splunk.com/Documentation/Forwarder/9.4.1/Forwarder/Configureforwardingwithoutputs.conf Did this answer help you? If so, please consider: Adding kudos to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
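As a rough sketch of the defaultGroup change (the group names here are made up; the splunkcloud group's server list and certificates come from the forwarder app downloaded from your Splunk Cloud stack):

[tcpout]
defaultGroup = internal_indexers, splunkcloud

[tcpout:internal_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

[tcpout:splunkcloud]
# populated by the Splunk Cloud Universal Forwarder app (splunkclouduf.spl)
server = inputs1.stackName.splunkcloud.com:9997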
Hi @Hemant_h What is different in terms of the data flow between the two environments? How are your syslog servers sending to both the on-prem and cloud Splunk instances? Are there any TAs installed on either Splunk instance? It could be that the timestamping is incorrect rather than missing. There are lots of variables at play here, so please let us know as much as you can about your environment, configuration and data pipelines. Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
Hi @MrLR_02 Splunk does not support saving configuration changes directly to the default directory via Splunk Web; all UI changes are always written to the local directory. If you want to pull these back into Git then you have a number of options: API calls to download the knowledge objects and store them on a filesystem (and of course optionally commit to Git). This is my current favourite approach and I am using it with a couple of customers. We are using a customised version of https://github.com/paychex/splunk-python/blob/main/Splunk2Git/Splunk2Git.py within a CI/CD pipeline to periodically pull down changes from the remote instance and then merge them into local. There are Splunkbase apps such as Git Version Control for Splunk which might work well in your scenario - allowing you to sync specific knowledge object types into Git. There is another app/Python tool called KSConf which is great at merging local content into default. If you have physical access to your dev environment then you might be able to use this in combination with some scripting to merge content and push it into Git. These are just a few ideas and there are others out there, but in my experience these have worked well in the past. Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
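As a minimal sketch of the REST approach (the hostname and credentials are placeholders; the same pattern works for other knowledge object endpoints), pulling all saved searches as JSON so they can be committed to Git:

curl -k -u admin:changeme \
  "https://splunk-sh.example.com:8089/servicesNS/-/-/saved/searches?output_mode=json&count=0" \
  -o saved_searches.json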
Using the Splunk App for SOAR I am creating events in SOAR using a dashboard in Splunk. I'm facing an issue where the same form submission in the dashboard results in multiple artifacts being created in the one event rather than a new event being created for each submission. Events in Splunk are held for 30 days; this can mean a time-sensitive request was requested and run 30 days ago, for example, but if it's requested again within those 30 days it won't generate a new event and run the playbook. I could probably add a unique ID to the form submissions, which would result in a new container being made (as the artifact values wouldn't be identical), but I was wondering if there's an option in the app or in SOAR to always generate a new container? Thanks
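On the unique-ID workaround mentioned in the question, a minimal SPL sketch (the field name is illustrative) that could be appended to the dashboard search feeding the SOAR event creation, so every submission carries a distinct artifact value and therefore lands in a new container:

| eval submission_id = sha256(tostring(now()) . "-" . tostring(random()))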