All Posts


Hi @saad76, I'm a Community Moderator in the Splunk Community. This question was posted 9 years ago, so it might not get the attention it needs to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you!
Hi all, I have created a dashboard, and in that dashboard I added a text filter. I need to add a placeholder to that text filter, like below. Thanks in advance!
You are going to see log entries that report a bytes-used value that is correctly less than maxQueueSize. This is useful for finding slow receivers and lets you fix the configs to avoid queue blocking. What is not expected is a log entry reporting more bytes than maxQueueSize, for example 18446603427033668018 bytes. The example log below is expected; in that case you would rather fix the configs so that one receiver cannot use the whole queue.

WARN AutoLoadBalancedConnectionStrategy [xxxx TcpOutEloop] - Current dest host connection nn.nn.nn.nnn:9997, oneTimeClient=0, _events.size()=41, _refCount=2, _waitingAckQ.size()=5, _supportsACK=1, _lastHBRecvTime=Thu Jun 20 12:07:44 2023 is using 25214400 bytes. Total tcpout queue size is 26214400. Warningcount=841
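If it helps, here is a minimal sketch of a search to pull these warnings out of the internal logs; the rex pattern is an assumption based on the example line above, so adjust it to your exact log format:

index=_internal sourcetype=splunkd component=AutoLoadBalancedConnectionStrategy "is using"
``` extract the bytes used and the configured queue size from the message ```
| rex "is using\s+(?<bytes_used>\d+) bytes\. Total tcpout queue size is (?<queue_size>\d+)"
``` flag the unexpected case where the reported bytes exceed maxQueueSize ```
| eval status=if(bytes_used > queue_size, "unexpected", "expected")
| table _time host bytes_used queue_size status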
Can you share the log?
@sudosplunk the setting is expected to be in inputs.conf (not in a custom inputs.conf). All modular inputs honor the meta setting starting with 6.4: https://community.splunk.com/t5/Getting-Data-In/UF-Route-inputs-to-specific-indexers-based-on-the-data-s-input/m-p/147597
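For illustration, a minimal sketch of such a stanza, assuming the setting in question is _TCP_ROUTING as in the linked thread (the monitor path and group name are placeholders, and the group must also exist as a [tcpout:...] stanza in outputs.conf):

[monitor:///var/log/example.log]
_TCP_ROUTING = indexer_group_a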
Nearly 10 years later, I have the same issue and still no solution!
In our case we have some CRON scripts and subsequently Ansible playbooks to do this, but essentially it is just doing +x on the permissions or FACLs of the script files.
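A minimal sketch of that step, assuming the scripts live in an app's bin directory (the app path and the splunk service user are placeholders):

chmod +x $SPLUNK_HOME/etc/apps/<your_app>/bin/*.sh
# or the FACL variant, assuming the service account is "splunk":
setfacl -m u:splunk:x $SPLUNK_HOME/etc/apps/<your_app>/bin/*.sh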
Hi @auzark, the main issue is that the app menu and the drilldown URLs point to the name of a dashboard (not to the title, as reports do), so if you clone a dashboard you have to modify all of the URLs pointing to it. So, my hint is to clone the current dashboard to <dashboard_name>_old (to save the old version) and then modify the current one, so you don't need to modify the URLs. Ciao. Giuseppe
I have multiple dashboards that I have cloned to make changes. What is the best method to rename the existing dashboards to old and rename the new dashboards in?

1) rename the current to old
2) rename the new to current

Or do I have to do the below?

1) clone the current to old
2) delete current
3) clone the new into current
4) delete the new
Averages are quite easy to deal with: if you start with average per minute, you can then calculate the average per hour, which is simply average per minute * 60. So there are actually a number of ways you can do this.

...search...
``` Count by 1 minute ```
| bin _time span=1m
| stats count by _time
``` Fill any gaps in the data ```
| makecontinuous _time span=1m
| fillnull count
``` Now take the max/avg/sum of PM values for 1 hour ```
| bin _time span=1h
| stats max(count) as max_pm avg(count) as avg_pm sum(count) as sum_pm by _time
``` Now do some final calcs to get the total sum ```
| stats max(max_pm) as max_pm sum(sum_pm) as total max(sum_pm) as max_ph avg(sum_pm) as avg_ph
``` And this last one just calculates the final per minute, which is total divided by the number of minutes ```
| addinfo
| eval avg_pm=total / ((info_max_time - info_min_time) / 60)
| fields - info_* total
| fields *pm *ph
``` And now convert that table into your table ```
| eval "Per Minute"=mvjoin(mvappend(avg_pm, max_pm), ":")
| eval "Per Hour"=mvjoin(mvappend(avg_ph, max_ph), ":")
| fields Per*
| transpose 0
| rex field="row 1" "(?<Average>[^:]*):(?<Maximum>.*)"
| fields - "row 1"

There are always a number of ways to do things in Splunk - this is somewhat convoluted, but will probably perform well if you are dealing with a lot of data. I am sure there is a simpler way also, but sometimes it's useful to show that almost anything is possible in Splunk.
Hi @myte, if you want the format you shared (Per Hour and Per Minute in different rows) you could use this solution, even if it isn't so efficient:

| .... main search ...
| bucket _time span=1h
| stats count BY _time
| stats avg(count) AS AverageCount max(count) AS MaxCount
| eval AverageCount=round(AverageCount,2), MaxCount=round(MaxCount,2), Type="Per Hour"
| append [
    | .... main search ...
    | bucket _time span=1m
    | stats count BY _time
    | stats avg(count) AS AverageCount max(count) AS MaxCount
    | eval AverageCount=round(AverageCount,2), MaxCount=round(MaxCount,2), Type="Per Minute" ]
| stats values(AverageCount) AS AverageCount values(MaxCount) AS MaxCount BY Type

Ciao. Giuseppe
so the searches are

| inputlookup duerr_counters.csv
| search category="blsoft_total"
| rename "Company Code" as CompanyCode
| lookup location_map.csv CompanyCode OUTPUTNEW Lat, Long
| rename CompanyCode as "Company Code"
| lookup lkp-GlobalIpRange "Company Code" OUTPUTNEW Niederlassung
| fields "Company Code" Region category Lat Long Niederlassung
| geostats globallimit=93 latfield=Lat longfield=Long count by Niederlassung

| inputlookup duerr_counters.csv
| search category="blsoft_exceptions"
| rename "Company Code" as CompanyCode
| lookup location_map.csv CompanyCode OUTPUTNEW Lat, Long
| rename CompanyCode as "Company Code"
| lookup lkp-GlobalIpRange "Company Code" OUTPUTNEW Niederlassung
| fields "Company Code" Region category Lat Long Niederlassung
| geostats globallimit=93 latfield=Lat longfield=Long count by Niederlassung

only the second line is different
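Since only the category filter differs, here is a minimal sketch of one way to run the shared pipeline once; note that geostats accepts a single split field, so this splits the map counts by category rather than by Niederlassung:

| inputlookup duerr_counters.csv
| search category IN ("blsoft_total", "blsoft_exceptions")
| rename "Company Code" as CompanyCode
| lookup location_map.csv CompanyCode OUTPUTNEW Lat, Long
| rename CompanyCode as "Company Code"
| lookup lkp-GlobalIpRange "Company Code" OUTPUTNEW Niederlassung
| fields "Company Code" Region category Lat Long Niederlassung
| geostats globallimit=93 latfield=Lat longfield=Long count by category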
Hi all, new here, so go easy!

I have a dashboard with many panels, time picker and so on. I'm hoping that I can use multiple time buckets for event counts in one table, for example span=1h AND span=1m. At this stage I'm not bothered about events that might take longer than each span.

The result I'm looking for:

             Average count   Max count
Per hour     {results}       {results}
Per minute   {results}       {results}

I can get this to work with two separate tables but combining into one is proving to be a challenge. If at all even possible.

For example, for per hour I have:

| .... main search ...
| bucket _time span=1h as perHour
| stats count as "eventsPerHour" by perHour
| stats avg(eventsPerHour) as avEventsPerHour max(eventsPerHour) as maxEventsPerHour
| eval avEventsPerHour=round(avEventsPerHour,2)
| rename avEventsPerHour as "Average count per hour" maxEventsPerHour as "Maximum count in one hour"

Which gives:

Average count per hour   Maximum count in one hour
{results}                {results}

Any pointers in the right direction much appreciated.
hi @Siddharthnegi, let me know if my hint solves your requirement. If you share your searches I could help you. Otherwise, please accept one solution for the other people of the Community. Ciao. Giuseppe P.S.: Karma Points are appreciated
Hi @corti77, you have to reset the mongod.lock:

stop splunkd
delete $SPLUNK_HOME\var\lib\splunk\kvstore\mongo\mongod.lock
start splunkd

I asked Splunk Ideas to create a script or a dedicated solution for this frequent issue; if you think, as I do, that it's important, please vote for this idea at https://ideas.splunk.com/ideas/EID-I-2058 Ciao. Giuseppe
Do you mean symbols scammers use to spoof Latin letters and letters in other writing systems? This question came up earlier, too. No, those cannot be characterized as non-ASCII characters in the everyday sense. Those are UTF-8 representations of letters of a totally different language that look like letters in a familiar language such as English. There is no built-in function in any programming language that can map those spoofs into English or map English letters into all variants of spoofs.

Your solution, if you want to build one, is to create a lookup table with, say, English letters as column "English", French letters as column "French", then populate a lookup key with all possible letters that can spoof each of these letters, including "English" A ← spoof A (real English A). Then, use a Splunk lookup to figure out whether any UTF-8 symbol that appears in a text matches a meaningful letter in a given language.

It will be a real project to produce this table, although I fancy there is Internet help from Dr. Google. It will be computationally expensive because you need to test lots of hypotheses like a real fraud analyst. Hope this helps. (For a real fraud analyst, it is probably cheaper to run the image of the "text" through an OCR and see what the symbols resemble.)
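As a hedged illustration of the lookup idea, assuming a hypothetical spoof_map.csv with columns spoof and English:

| makeresults
| eval text="Ρаypal"
``` split the text into single characters (the sample uses a Greek capital Rho and a Cyrillic a as look-alikes) ```
| rex field=text max_match=0 "(?<char>.)"
| mvexpand char
``` map each character back to a plain English letter where the table knows it ```
| lookup spoof_map.csv spoof AS char OUTPUTNEW English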
yes, all the searches are the same; only one line is different. For example, in search 1 I am using

| search category = os_client

and in search 2 I am using

| search category = os_server

Like this. Is this helpful?
Hi, in addition to using tokens as was suggested, you can template the body of the message by configuring alert_actions.conf. For example you could have an alert_actions.conf (don't update the etc/system/default one) and add:

[email]
message.alert = Please fill this report in as follows\
Step 1\
Step 2\
\
And finally\
Step 3

I use the \ because you want the message.alert value to span multiple lines. That should get you something like the attached screenshot.

More details here: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Alertactionsconf
Hi, I run Splunk 9.0.8, and after an issue with our storage (LUN full) I had to do a full scan of the disk and successfully repaired the filesystem. Splunk starts now, but there is a persistent error with the KV store. The status is the following:

show kvstore-status

This member:
backupRestoreStatus : Ready
disabled : 0
guid : E59FAAA8-D66A-498E-8DDF-0F5C29866F95
port : 8191
standalone : 1
status : failed
storageEngine : wiredTiger

Any suggestion on how I could get to an operational status? Many thanks. Jose