All Posts


You definitely should read what Harendra said!
I followed these steps to add a new box to an existing SHC. Everything looks fine on the SHC side, but the KV store status has been 'status : starting' ever since, and it's been over an hour. Is this normal, or did I miss something?
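For anyone debugging the same thing, a minimal sketch of where I would look first (assuming a standard $SPLUNK_HOME, admin credentials, and <new_member> as a placeholder for the new node):

# On the new member: local and replication status of the KV store
$SPLUNK_HOME/bin/splunk show kvstore-status -auth admin:yourpassword

# From a search head: watch KV store / mongod messages while it initialises
index=_internal host=<new_member> sourcetype=splunkd component=KVStore*
index=_internal host=<new_member> sourcetype=mongod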
When you send to more than one target and any one of them blocks the traffic, all traffic will be blocked quite soon afterwards. Basically, once the blocked target's queue is full, all the other targets are blocked too. This is Splunk's default behavior. There are two options you could change to alter this behavior, but doing so increases the probability of losing some events.
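For reference, the two options being referred to are (I believe) the queue-full drop timeouts in outputs.conf on the forwarder. A minimal sketch, with placeholder group names and illustrative values:

# outputs.conf
[tcpout]
defaultGroup = groupA, groupB
# Drop events for a blocked output group after 30s instead of blocking indefinitely (default -1 = never drop)
dropEventsOnQueueFull = 30
# Same idea for cloned events when sending to multiple target groups
dropClonedEventsOnQueueFull = 30

Both settings trade delivery guarantees for availability, which is the increased probability of event loss mentioned above.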
Hi @kiran_panchavat  I'd challenge this as either incorrect or missing some context. I appreciate that this is the sort of thing PS gets involved with, but I know a number of customers who have managed this themselves; once it is in DDSS it isn't much different to a standard thaw process. In fact the process is detailed in the public docs (Restore indexed data from a self-storage location) with a step-by-step procedure which does not reference any requirement for PS. I created a script to convert DDSS to SmartStore for a customer who wanted a small on-prem SH to be able to access old data, which you might find useful: https://github.com/livehybrid/ddss-restore
@MuS That got me part of the way there, but I think I may have accidentally oversimplified my question a bit. I'll post another question to get the second half answered. Thanks for the help!
According to the Splunk Cloud Overview Technical Enablement, Splunk recommends engaging Professional Services. @livehybrid
Hi @Cheng2Ready  In the SPL you have shared you are appending a makeresults with count=0, then stats sum(count) as total, but in your WHERE clause you have total>1 OR total=0. If total=0, which I guess it will be, you will still get 1 result returned; even if that result says total=0, it will still match the alert criteria of "number of events > 0". I think you meant to close the append after the eval count=0. Can you try the following instead?

index=xxxxxx
| eval HDate=strftime(_time,"%Y-%m-%d")
| search NOT [ | inputlookup Date_Test.csv | fields HDate ]
| stats count
| append [ | makeresults | eval count=0 | fields count ]
| stats sum(count) AS total
| where total>1

Although I'm confused as to why you couldn't just do this:

index=xxxxxx
| eval HDate=strftime(_time,"%Y-%m-%d")
| search NOT [ | inputlookup Date_Test.csv | fields HDate ]
| stats count
| where count>0
Hi @kaeleyt  Use the Splunk lookup feature: save Dataset 2 (the ip-to-hostname mapping) as a CSV lookup file and then use the lookup command to enrich Dataset 1. This fully bypasses the subsearch, join, and append limits.

Create the CSV lookup table from Dataset 2:

index=my_hosts
| stats values(hostname) as hostname by ip
| outputlookup ip_to_hostname.csv

Enrich Dataset 1 with hostnames via the lookup:

index=my_logs
| stats count by ip
| lookup ip_to_hostname.csv ip OUTPUT hostname
| table ip, count, hostname

The lookup command does not have the same limiting factors as join, append, or subsearch for reasonable file sizes; you could use either CSV or KV store lookups. If Dataset 2 changes regularly you could overwrite the lookup via a scheduled search. For very large lookups, Splunk recommends KV store lookups for scale, but CSV lookups generally perform well up to 1M+ rows. Confirm that the field names (ip, hostname) match exactly between the lookup and the base data.
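One related knob for CSV lookups of this size: above a size threshold Splunk index-backs the CSV on disk rather than holding it in memory, which is controlled in limits.conf. A sketch only; check your version's default before changing it, and raise it with care:

# limits.conf on the search head
[lookup]
# CSV lookup files larger than this many bytes are indexed on disk instead of kept in memory
max_memtable_bytes = 100000000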
Hi there, Oldy but goldy   https://community.splunk.com/t5/Splunk-Search/How-to-compare-fields-over-multiple-sourcetypes-without-join/m-p/113477#M29849 Hope this helps ... Cheers, MuS
Professional Services is not required to configure or use DDSS. If you are moving away from Splunk Cloud, a customer can set up DDSS using an sc_admin account via the Web UI and/or ACS and then configure their indexes to use the DDSS location. To migrate away from Splunk Cloud the customer will then need to reduce the retention on these indexes, which will trigger existing buckets to roll from DDAS (Active Searchable) to "frozen" (DDSS). At that point the buckets in S3 are the same as any other frozen bucket from Splunk Enterprise or Splunk Cloud and can be thawed (see https://docs.splunk.com/Documentation/Splunk/9.4.2/Indexer/Restorearchiveddata). If only the raw data is required, it can be extracted from the journal.
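For completeness, the thaw itself is fairly mechanical once the frozen buckets are back on an indexer. A rough sketch, with placeholder bucket names and paths, assuming the target index already exists on that indexer:

# Copy a frozen bucket from the DDSS S3 location into the index's thaweddb directory
aws s3 cp s3://<ddss-bucket>/<prefix>/db_1706000000_1705000000_123/ $SPLUNK_HOME/var/lib/splunk/<index>/thaweddb/db_1706000000_1705000000_123/ --recursive

# Rebuild the bucket's metadata so it becomes searchable again
$SPLUNK_HOME/bin/splunk rebuild $SPLUNK_HOME/var/lib/splunk/<index>/thaweddb/db_1706000000_1705000000_123

Note that thawed buckets are not replicated on clustered indexers, and a restart may be needed before they show up in searches.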
Hi @_olivier_  I certainly don't doubt my PS colleagues in their recommendations, as they will have more information than I do about this particular set-up, but I wanted to make sure you knew about the queues etc. If it's helpful, there is a really good explanation of pipelines at https://community.splunk.com/t5/Getting-Data-In/How-many-pipelines-should-I-use-on-a-forwarder/m-p/410734 which is worth a read. Interesting that you mention sending to two platforms, as it does sound like congestion outbound from the UF rather than an actual issue with the parsing part of the pipeline. It might be worth (if possible) monitoring the network egress to confirm it's not hitting a ceiling, and also checking whether either of the two outputs is blocking (check the _internal logs on both platforms for "TcpOutputProc" errors). Another resource worth checking is https://conf.splunk.com/files/2019/slides/FN1570.pdf which might also help.
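To make the "check the _internal logs" part concrete, something along these lines on each platform should do it (a sketch; replace <your_uf> with the forwarder's host name):

# Output-side errors reported by the UF
index=_internal host=<your_uf> sourcetype=splunkd component=TcpOutputProc (blocked OR "Connection failure")

# Queue fill levels on the UF, to see which queue backs up first
index=_internal host=<your_uf> source=*metrics.log* group=queue
| timechart span=5m max(current_size) by name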
Situation: I have 2 data sets. Dataset 1 is a set of logs which includes IP addresses; when aggregated, there are 200,000+ IP addresses. Dataset 2 is a dataset we pull in once a day which includes identifying information for those IP addresses, including hostname for example. This dataset is even larger. I want to map the hostname from Dataset 2 to the IP address in Dataset 1. I feel like I've tried everything (join, append + eventstats, subsearching) and unfortunately all of them have a limit which prevents me from getting the full set mapped. Join limit: 50,000. Append limit: 10,000. Subsearch limit: 10,000. I've come across this same sort of issue before and have dropped projects because there doesn't seem to be an obvious way to get around these limits without increasing settings like subsearch_maxout for our whole environment by at least 10x. I've started looking into the map command, but the documentation seems extremely vague on the limits ("Zero ( 0 ) does not equate to unlimited searches."). The only thing I've gotten to work is to manually break the 2nd data source up into groups of 10,000 or fewer rows and append + eventstats each group one by one, which is a complete nightmare of a query, and additional appends need to be created any time the 2nd data set changes or grows. I'm growing tired of not having a good way of tackling this issue, so I'm seeking advice from any fellow Splunkers who have successfully "joined" larger datasets.

Some example searches to help with the situation:

Dataset 1 search:
index=my_logs
| stats count by ip

Dataset 2 search:
index=my_hosts
| stats values(hostname) as hostname by ip
Hi @braxton839  Just to check, this is a HF, not a UF, right? And it doesn't come through another HF before reaching this one? Are you able to confirm what sourcetype the syslog input is set to on this host? I'm assuming it's "juniper", but if it's anything else then the props.conf stanza I supplied would need updating.
@gcusello the Alert still fired    
Thank you for your advice
@gcusello "at first check that the date format is the same both in events (after eval command) and in lookup" This is what I have in the look up  
Hi @Cheng2Ready , at first check that the date format is the same both in the events (after the eval command) and in the lookup, then try inserting into the lookup a test date that you're sure has events. Lastly, don't use this condition in the alert: put the condition inside the alert search and not in the alert definition. In other words, in the alert definition use results>0 and use this search:

index=xxxxxx
| eval HDate=strftime(_time,"%Y-%m-%d")
| search NOT [ | inputlookup Date_Test.csv | fields HDate ]
| stats count
| append [ | makeresults | eval count=0 | fields count)
| stats sum(count) AS total
| where total>1 OR total=0

In this way, removing the final condition, you can check your search results before the alerting. Ciao. Giuseppe
What I have read and understood from many discussions is that even when there are several pipelines, they all share a single input part. My understanding is that there is one input, the pipelines start after it, and it is possible for that input to be blocked, which also blocks the other pipelines.
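For context, this is the setting in question, set in server.conf on the forwarder. A sketch only; whether the input stage is shared or duplicated per pipeline set is exactly the point being discussed, so treat the comment as a summary of my reading rather than a settled answer:

# server.conf
[general]
# Number of ingestion pipeline sets (default 1); each set gets its own queues and output processor
parallelIngestionPipelines = 2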
This is what I have set up:

index=xxxxxx
| eval HDate=strftime(_time,"%Y-%m-%d")
| search NOT [ | inputlookup Date_Test.csv | fields HDate ]

The search always returns 1 event. The alert condition is: if it sees more than 1 event OR 0 events, trigger an alert. The issue I'm facing now is with the lookup table dates. Let's say I have it set up for April 14th in my lookup file "Date_Test.csv". On April 14th it still fired an alert, and I'm not sure if it's because it sees 0 events? It is supposed to be muted on that day. Any insight and help would be much appreciated.
@ssuluguri  If you enable Dynamic Data Self-Storage (DDSS) to export your aged ingested data, the oldest data is moved to an Amazon S3 account in the same region as your Splunk Cloud deployment before it is deleted from the index. You are responsible for the AWS charges for the use of the Amazon S3 account. Once data is deleted from the index, it is no longer searchable by Splunk Cloud. Customers are responsible for managing DDSS and a non-Splunk Cloud stack for searching archived data. This is a manual process and customers will require a professional services engagement. https://docs.splunk.com/Documentation/SplunkCloud/latest/Admin/DataSelfStorage  NOTE: DDSS data egress - no limit; export rate 1 TB/hr; must be in the same region as the indexing tier. https://www.splunk.com/en_us/blog/platform/dynamic-data-data-retention-options-in-splunk-cloud.html