All Posts

I guess this is because of the "by" clause - do you want to count by another field, or just a total count? You can remove the by <fieldName> if you don't need it.
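A minimal illustration of the difference, using placeholder index and field names (not from the original post):

index=your_index | stats count by status
index=your_index | stats count

The first returns one row per status value; the second returns a single overall count.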
I created a KV Store lookup using the "Splunk App for Lookup File Editing" app; however, when I look at Settings > Lookups, the lookup definition doesn't show up. In addition, when running | inputlookup <name> I get the error "The lookup table '<name>' requires a .csv or KV store lookup definition". What am I missing?
What distinguishes the first event from the second? Assuming it is a line with "lost connection", you could try something like this:

| makeresults
| fields - _time
| eval _raw="FAILED to copy checksum for: /logs/archives/archived-logs/server01.log.gz
Host key verification failed.
lost connection"
| append
    [| makeresults
    | fields - _time
    | eval _raw="FAILED to copy checksum for: /logs/archives/archived-logs/server02.log.gz
You are attempting to access a system owned by XYZ
Provide proper credentials for access
Contact the system administrator for assistance
---This system is monitored---
Details as follows.
scp: /logs/rsyslog/server02/: Not a directory"]
| rex "(?m)FAILED to copy checksum for:[^\n]+\n([^\n]+\n)*(?!lost connection)(?<line>[^\n]+(\nlost connection|$))"
Very helpful, thank you. I will now play with eventstats to try to refine the results. Using stats I had two rows, one for each group where the count > X. Now with eventstats I get individual rows for each of the events that made up the two stats rows. So, for X=4, I had 2 rows with counts of 9 and 5, respectively; now I'm seeing 14 events returned. Definitely closer to what I'm looking for.
@livehybrid - I need the last 2 lines of the first event, and the last line of the second event. I honestly don't know if this is even possible. The events start with "FAILED to copy checksum for: ". I will work with what you have sent and see what results I get. Thank you.
I cannot find anything in outputs.conf that will allow you to control the HTTP version sourced at the UF itself. Splunk documentation implies a load balancer can/should be used and can control the HTTP version. Their example is NGINX, but there are others out there which may or may not support this in the same fashion. https://docs.splunk.com/Documentation/Forwarder/9.4.0/Forwarder/Configureforwardingwithoutputs.conf#Send_data_over_HTTP_using_a_load_balancer
Hi @spm807
Once you have used "stats" you will have a statistics table with your summarised data output. At this point you are no longer able to view the original events. Depending on your use case, you may find that "eventstats" is more useful:

| eventstats count as total_count by some_field

This will create the count (total_count in this example) whilst still retaining the original events.
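A minimal sketch of how this could be combined with the threshold from the question, assuming a placeholder index, field, and threshold of 4 (all illustrative, not from the original posts):

index=your_index
| eventstats count as total_count by some_field
| where total_count > 4

Because eventstats annotates each event rather than collapsing them, the where clause keeps every original event belonging to a group whose count exceeds the threshold.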
How do I show details of individual records in a count total? I have a query that counts events and then returns the total count when it's above a specified threshold. How do I display the individual events that constitute that count total, but only for those totals where the count exceeds the threshold?
Hi @TheJagoff
I'm struggling a little to work out the boundaries between the events, but I think I might have it now. Just to check - is it the last line in each event that you want to extract? If so, the following might work well:

| rex max_match=100 field=_raw "(?m)(?<message>[^\n\r]+)$"
| eval last_line = mvindex(message, -1)

In case it's useful for future responses, below is the full example with some makeresults to emulate your events:

| makeresults
| eval _raw="FAILED to copy checksum for: /logs/archives/archived-logs/server01.log.gz
Host key verification failed.
lost connection"
| append
    [| makeresults
    | eval _raw="FAILED to copy checksum for: /logs/archives/archived-logs/server02.log.gz
You are attempting to access a system owned by XYZ
Provide proper credentials for access
Contact the system administrator for assistance
---This system is monitored---
Details as follows.
scp: /logs/rsyslog/server02/: Not a directory"]
| rex max_match=100 field=_raw "(?m)(?<message>[^\n\r]+)$"
| eval last_line = mvindex(message, -1)
Hi
There isn't quite enough info in the post to work out exactly what you need - however, the following should get you started.

Note: I wouldn't recommend looking back 24 hours every time - what is the reason for this? I would recommend just looking back 60 minutes; you could use earliest=-70m latest=-10m to make sure you get data that arrives up to 10 minutes late (see the sketch after this post).

1. Run a search which returns the events you want to be alerted on: index=your_index action=update subcategory=WEB_DLP_POLICY earliest=-1d@d latest=now
2. Click Save As -> Alert.
3. Name the alert and change the settings to make it run hourly. You'll need to apply some throttling, because otherwise you may get duplicate alerts every time it runs.
4. Scroll down and set up your chosen alert action - presumably Email? Configure this according to your requirements and then save.

Some useful docs: https://docs.splunk.com/Documentation/Splunk/9.4.1/Alert/Definescheduledalerts
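A minimal sketch of the hourly variant suggested in the note above (the index name is a placeholder):

index=your_index action=update subcategory=WEB_DLP_POLICY earliest=-70m latest=-10m

Scheduling this hourly with the trigger condition "number of results is greater than 0" avoids re-searching the whole day on every run; the overlapping 10-minute tail catches late-arriving events, with throttling suppressing any duplicates that the overlap produces.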
I have multiline events where it is required to capture the error messages. The events are separated by "FAILED". I need to capture "Host key verification failed" from the first event, and "scp: /logs/rsyslog/server02/: Not a directory" from the second event. The events:

FAILED to copy checksum for: /logs/archives/archived-logs/server01.log.gz
Host key verification failed.
lost connection

FAILED to copy checksum for: /logs/archives/archived-logs/server02.log.gz
You are attempting to access a system owned by XYZ
Provide proper credentials for access
Contact the system administrator for assistance
---This system is monitored---
Details as follows.
scp: /logs/rsyslog/server02/: Not a directory

I can capture the first message with:

FAILED.+\:\s(?<LogFile>.+)(\n)(?<Message>.+(\n).+)

I don't know how to skip ahead and capture the last line of the second event for the Message field. Any help is most appreciated. Thank you.
What have you tried so far? Where did you get stuck? Why check the whole day every hour? If nothing was found in the 00:00-01:00 period at the 01:00 run, then nothing will be found in the same period at the 02:00 run. Searching the same data repeatedly is a waste of resources.
I would like to configure an alert that triggers based on the action and subcategory below, and I would like it to run hourly to check whether there are any hits each day.

action=update subcategory=WEB_DLP_POLICY
Hi @bpenny
If you're looking to do it as an automatic lookup then you should be able to use the following, configured from Settings -> Lookups -> Automatic Lookups, or as a props.conf:

[yourSourceType]
LOOKUP-lookup1 = yourLookupName type AS "msg.message_set{}.type" OUTPUTNEW typeDescription AS typeDescription
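The lookup name referenced above must also exist as a lookup definition. A minimal sketch of the matching transforms.conf, assuming a CSV-backed lookup with a placeholder filename (both names are illustrative):

# transforms.conf
[yourLookupName]
filename = yourLookup.csv

The quotes around "msg.message_set{}.type" in the props.conf stanza matter, since the field name contains dots and braces.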
Thanks, @livehybrid, this is very close to what I need. What I ultimately want, though, is to make these automatic lookups. We actually have about ten different ones that we need to apply to this particular sourcetype. I just can't seem to figure out how to add something like msg.message_set{}.type to an automatic lookup and have it work.
@splunklearner
If you want a pull model, there is https://splunkbase.splunk.com/app/1876
For a push model, I believe HEC is the recommended approach.

Best practices for Splunk HTTP Event Collector:
- Always configure HEC to use HTTPS to ensure data confidentiality during transmission. Enable SSL/TLS encryption and leverage certificate-based authentication to authenticate the sender and receiver (see the sketch after this post).
- Consider the expected data volume and plan your HEC deployment accordingly. Distribute the load by deploying multiple HEC instances and using load balancers to ensure high availability and optimal performance.
- Implement proper input validation and filtering mechanisms to prevent unauthorized or malicious data from entering your Splunk environment. Use whitelists, blacklists, and regex patterns to define data validation rules.
- Regularly monitor the HEC pipeline to ensure data ingestion is successful. Implement proper error handling mechanisms and configure alerts to notify administrators in case of failures or issues.

Some common challenges associated with Splunk HEC:
- While HEC is designed to handle high volumes of data, organizations with extremely large-scale deployments may face challenges related to scalability and performance. Carefully plan the HEC deployment, consider load-balancing mechanisms, and optimize configurations to ensure optimal performance.
- HEC relies on network connectivity for data ingestion, so any issues with network availability or reliability can impact the ingestion process. Organizations should have robust network infrastructure and redundancy measures in place to minimize downtime and ensure uninterrupted data flow.
- While HEC provides authentication mechanisms and supports SSL/TLS encryption, configuring and managing authentication and security settings can be complex. Organizations need to properly configure user access controls, certificates, and encryption protocols to ensure secure data transmission and prevent unauthorized access.
- HEC allows data ingestion from various sources, making it crucial to implement proper input validation and filtering. Ensuring the integrity and quality of the ingested data requires defining validation rules, whitelists, blacklists, and regular expressions to filter out unwanted or malicious data.
- Monitoring the HEC pipeline and troubleshooting issues can be challenging. Establish monitoring processes to track the health and performance of HEC instances, implement logging and alerting, and have troubleshooting strategies in place to quickly identify and resolve problems.
- Integrating HEC with different data sources, applications, and systems can pose compatibility challenges. Ensure that the data sources are compatible with HEC and have the necessary configurations in place for seamless integration.
- Configuring and maintaining HEC instances and associated settings requires technical expertise and ongoing effort. Keep HEC configurations up to date, apply patches and updates, and regularly review and optimize settings to ensure optimal performance and security.
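As a concrete illustration of the HTTPS point above, this is roughly what a push to HEC looks like; the hostname, token, and index are placeholders:

curl https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk 12345678-1234-1234-1234-123456789012" \
  -d '{"event": "hello from HEC", "sourcetype": "manual", "index": "main"}'

For testing you may need curl's -k flag if the HEC certificate is self-signed; in production, use a certificate the sender trusts.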
Hi @splunklearner,
the HEC is the channel to receive data, but the inputs and the parsing and normalization rules are in the add-on. In fact, the link you shared is a description of the add-on configuration process: it isn't sufficient to configure the token to send data; you also need to configure the add-on to define the inputs to enable.
Ciao.
Giuseppe
Hi
Splunk Cloud indexer IP addresses can change over time; they are not fixed, as indexers may scale in/out or be replaced during upgrades. Always use the provided DNS hostname (e.g., inputsXX.stackName.splunkcloud.com) for forwarding and firewall rules. For firewall configuration, allow outbound traffic on port 9997 to the Splunk Cloud DNS hostname, not to specific IPs. Splunk Cloud search head IPs are less likely to change, as they are updated less frequently, but there is no guarantee that they will persist.
Some useful docs: https://docs.splunk.com/Documentation/SplunkCloud/latest/Service/SplunkCloudservice#Network_connectivity_and_data_transfer
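A minimal outputs.conf sketch on the forwarder side, assuming the hostname pattern mentioned above (stackName and the inputs1 prefix are placeholders for your stack's values):

# outputs.conf on the forwarder
[tcpout:splunkcloud]
server = inputs1.stackName.splunkcloud.com:9997

In practice, Splunk Cloud customers typically install the universal forwarder credentials app downloaded from their stack, which ships a complete outputs.conf with the correct hostnames and certificates, rather than writing this by hand.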
Hi @splunklearner
You need to send your Amazon Kinesis Firehose data to an Elastic Load Balancer (ELB) with sticky sessions enabled and cookie expiration disabled. Kinesis uses indexer acknowledgement, so it's important that the LB is configured correctly: the sticky-session setting is required so that Kinesis reaches the correct HF/indexer when it checks the acknowledgement.
Regarding the endpoint/service behind the ELB - this can be either a HF or your indexer cluster, depending on your configuration. You should also install the Splunk Add-on for Amazon Web Services (AWS), which has the appropriate field extractions etc., *if you are sending AWS data*. If you are sending your own application data then this may not be required; it depends on the processing done within Kinesis.
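A minimal sketch of the HEC token stanza on the HF/indexer behind the ELB, with indexer acknowledgement enabled as Firehose requires (the token value and stanza name are placeholders):

# inputs.conf on the HF/indexer behind the ELB
[http://firehose]
token = 12345678-1234-1234-1234-123456789012
useACK = true
disabled = 0

Firehose posts events to the /services/collector endpoint through the ELB and then polls the acknowledgement endpoint, which is why the sticky-session configuration matters: the ack query must land on the same instance that accepted the data.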
@gcusello but why can't we use a HEC token here? Please help me with the disadvantages so that I can discuss this with my team.