Can we use the Splunk Add-on for AWS for free, or does it require a license when used with a Splunk Enterprise free trial?
Thanks for the explanation, it was educational. I have corrected the plus operators where appropriate. On the other hand, I have now triple-checked, and indeed, multiple leading whitespaces are ignored in the FORMAT string. But yes, it would seem that Splunk, or whoever wrote the SC4S config, assumed that they would be honored.
I have the same issue. Have you fixed it?
I solved the problem using rex: (\"|)Rule:\s*(?P<Rule>.*?)(?:(\"))?
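For anyone landing here later, a minimal sketch of how a rex like that might sit in a full search. The index, sourcetype, and use of _raw are assumptions, not from the original post, and the pattern is tightened slightly so the lazy group cannot match an empty string:

index=firewall sourcetype=fw_logs
| rex field=_raw "\"?Rule:\s*(?P<Rule>[^\"]+)\"?"
| table _time Rule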
Hi, here are a couple of links to old answers where we discussed this.
https://community.splunk.com/t5/Deployment-Architecture/Right-number-and-size-of-hot-warm-cold-buckets/m-p/681358
https://community.splunk.com/t5/Deployment-Architecture/Hot-Warm-Cold-bucket-sizing-How-do-I-set-up-my-index-conf-with/m-p/634696
https://community.splunk.com/t5/Deployment-Architecture/Index-rolling-off-data-before-retention-age/m-p/684799
https://community.splunk.com/t5/Splunk-Enterprise/Why-do-we-have-warm-buckets/m-p/700835
Some of these are a little outside the direct scope of your question, but they should still give you a better understanding of how this works. r. Ismo
My friend and I have the same indexes.conf, so why are the buckets being created with different sizes? Mine are around 1 MB, but my friend's are created in 5.x MB units.

indexes.conf:

[volume:hot]
path = /data/HOT
maxVolumeDataSizeMB = 100

[volume:cold]
path = /data/COLD
maxVolumeDataSizeMB = 100

[lotte]
homePath = volume:hot/lotte/db
coldPath = volume:cold/lotte/colddb
maxDataSize = 1
maxTotalDataSizeMB = 200
thawedPath = $SPLUNK_DB/lotte/thaweddb
On comments: you should never put a # at the end of an attribute line in any *.conf file. Splunk cannot handle those correctly! All comments must be on their own lines!
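For example, with a hypothetical index stanza, this is broken, because everything after the = sign, including the comment, becomes part of the attribute value:

[web]
frozenTimePeriodInSecs = 86400  # keep one day

whereas this works:

[web]
# keep one day
frozenTimePeriodInSecs = 86400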
Thank you for your reply. The core requirement is: when the same user or department accesses the same account multiple times within 7 consecutive days, each day with at least one visit counts as 1, and I finally want to filter out those that were visited on more than 2 days. For example, the first query window starts on January 1, 2019 at 00:00:00 and ends on January 8 at 00:00:00, the second window starts on January 2, 2019 at 00:00:00 and ends on January 9 at 00:00:00, and so on. The SPL submitted above does the core calculation inside the map; the earliest time calculated before the map uses this 7-day logic to generate the periodic windows.
Can you explain what you are trying to do in English, not with SPL?
Hi @ITWhisperer,

I found your answer really helpful the other day. Now I am facing one small issue with it. The query is adding up the time (number of seconds) of previous occurrences in the dashboard. My requirement is that the query should show the host name with the date and the number of seconds of downtime on that particular date.

The current query is:

index="index1"
| search "slot"
| rex field=msg "VF\s+slot\s+(?<slot_number>\d+)"
| dedup msg
| sort _time, host
| stats range(_time) as downtime by host, slot_number

Here I am basically calculating, in seconds, the downtime of network card slots in servers. Can you please help me modify the query?
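One possible tweak, assuming each downtime episode starts and ends within the same calendar day (your field names kept as-is): group the stats by day as well, so range(_time) is computed per host, slot, and date instead of across all occurrences:

index="index1"
| search "slot"
| rex field=msg "VF\s+slot\s+(?<slot_number>\d+)"
| dedup msg
| eval date=strftime(_time, "%F")
| stats range(_time) as downtime by host, slot_number, date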
@anmohan0  You should be able to do it with some javascript and css. There's a bit of a how-to here: https://www.splunk.com/en_us/blog/tips-and-tricks/using-bootstrap-modal-with-splunk-simple-xml.html?301=/blog/2014/02/24/using-bootstrap-modal-with-splunk-simple-xml.html&locale=en_us 
Thanks a lot @ITWhisperer, you saved me, and it works seamlessly, just the way I wanted.
Thanks @P_vandereerden, it worked just the way I wanted.
Hi, in my Splunk dashboard there are a few drop-down inputs and a submit button that submits the tokens for the search query, but I would like a popup box to confirm or cancel when clicking the Submit button. Is this possible? Can someone please help?
We use the map function to query data. Both the July and the March data can be queried separately and return results, but selecting a time range of March to July consistently displays only the March data and loses the July results. The impact is significant now, and we hope you can help us check this, or suggest whether we can implement it in a different way. The SPL I use is as follows:

index=edws sourcetype=edwcsv status="是"
| stats earliest(_time) as earliest_time latest(_time) as latest_time
| eval earliest_time=strftime(earliest_time, "%F 00:00:00")
| eval latest_time=strftime(latest_time, "%F 00:00:00")
| eval earliest_time=strptime(earliest_time, "%F %T")
| eval earliest_time=round(earliest_time)
| eval latest_time=strptime(latest_time, "%F %T")
| eval latest_time=round(latest_time)
| addinfo
| table info_min_time info_max_time earliest_time latest_time
| eval searchEarliestTime=if(info_min_time == "0.000", earliest_time, info_min_time)
| eval searchLatestTime=if(info_max_time == "+Infinity", relative_time(latest_time, "+1d"), info_max_time)
| eval start=mvrange(searchEarliestTime, searchLatestTime, "1d")
| mvexpand start
| eval end=relative_time(start, "+7d")
| eval alert_date=relative_time(end, "+1d")
| eval a=strftime(start, "%F")
| eval b=strftime(end, "%F")
| eval c=strftime(alert_date, "%F")
| fields start a end b c
| map search="search earliest=\"$start$\" latest=\"$end$\" index=edws sourcetype=edwcsv status=\"是\" | bin _time span=1d | stats dc(_time) as \"访问敏感账户次数\" by date day name department number | eval a=$a$ | eval b=$b$ | eval c=$c$ | stats sum(访问敏感账户次数) as count, values(day) as \"查询日期\" by a b c name number department" maxsearches=500000
| where count > 2
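If map keeps misbehaving over long ranges, a rolling 7-day distinct-day count can also be sketched without map at all, using streamstats with a time window. This is only a sketch: it reuses the field names from the search above, assumes the results are sorted in descending time order (which streamstats time_window needs), and does not reproduce the a/b/c window labels from the map version:

index=edws sourcetype=edwcsv status="是"
| bin _time span=1d
| stats count by _time, name, number, department
| sort 0 - _time
| streamstats time_window=7d dc(_time) as count by name, number, department
| where count > 2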
Your help was very much appreciated.
@jiaminyun If you find this solution satisfactory, please proceed to accept it.
@jiaminyun  Splunk prioritizes evaluating the total data size in the index against the `maxTotalDataSizeMB` parameter. If the total size exceeds the defined limit, Splunk will begin deleting the oldest buckets, regardless of whether they satisfy the retention period defined by `frozenTimePeriodInSecs`. Conversely, if the data size remains within the specified limit, the system will then assess buckets based on the `frozenTimePeriodInSecs` parameter to archive or delete those exceeding the time threshold. To ensure consistent data retention for a specific duration (e.g., 200 days), it is essential to configure `maxTotalDataSizeMB` to accommodate the anticipated volume of data for the desired retention period.
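As a concrete illustration, for a hypothetical index ingesting roughly 500 MB/day where you want 200 days retained, the size cap has to be at least the expected volume:

[myindex]
# 200 days, in seconds
frozenTimePeriodInSecs = 17280000
# ~500 MB/day x 200 days, plus some headroom so the size limit
# does not kick in before the time limit
maxTotalDataSizeMB = 120000

The exact numbers are illustrative, not a recommendation.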
@jiaminyun

The priority between frozenTimePeriodInSecs and maxTotalDataSizeMB can be understood as follows: maxTotalDataSizeMB takes precedence. If the index size exceeds maxTotalDataSizeMB before reaching the time set in frozenTimePeriodInSecs, the data will be rolled to the frozen state based on the size limit.
http://docs.splunk.com/Documentation/Splunk/latest/Indexer/Setaretirementandarchivingpolicy
@Cccvvveee0235  Great.