All Posts


Hi @man03359 .. the metricName can be either CpuPercentage or MemoryPercentage. The next question is how you get the numeric value for each of those metrics. If the values are present in your events, you should be able to run:

index=idx-cloud-azure "*09406b3b-b643-4e86-876e-4cd5f5a8be57*" | chart count by index, metricName | where CpuPercentage > 85 AND MemoryPercentage > 85

Note that chart count only counts events per metricName; it does not aggregate the metric values themselves. When you run this search, do you get the results you expected? If yes, then you can save it as an alert. Please let us know if the above search works fine; if it does not, please tell us how to get the values of either the CPU or memory percentage. Thanks.
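As a minimal sketch of an alternative that aggregates the metric values rather than event counts, assuming each event carries the metric's value in a numeric field named average (that field name is an assumption, not confirmed in this thread):

index=idx-cloud-azure "*09406b3b-b643-4e86-876e-4cd5f5a8be57*" metricName IN ("CpuPercentage", "MemoryPercentage")
| stats avg(average) AS avg_value BY metricName
| where avg_value > 85

Saved as an alert that triggers when the number of results is greater than zero, this would fire whenever either metric averages above 85% over the search window.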
Hi @AL3Z .. on Linux, using the find and grep commands, you can find all the blacklist lines recursively:

find . -name '*.conf' -exec grep -i 'blacklist' {} \; -print

grep -Ril "text-to-find-here" /

-i stands for ignore case (optional in your case).
-R stands for recursive.
-l stands for "show the file name, not the match itself".
/ stands for starting at the root of your machine.
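As a more targeted sketch for the deployment server case, assuming the default $SPLUNK_HOME layout (adjust the paths if your deployment apps live elsewhere):

# list every inputs.conf line mentioning blacklist across all deployment apps, with file names
grep -Ri 'blacklist' $SPLUNK_HOME/etc/deployment-apps/*/local/inputs.conf $SPLUNK_HOME/etc/deployment-apps/*/default/inputs.conf

# alternatively, btool shows the merged config with the file each setting comes from;
# note it covers the local instance's apps, not the deployment-apps directory
$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i blacklist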
Hello @inventsekar, Thank you for your response and for sharing the details. I read about the health check and found that it gives information related to configuration changes. I checked the Job Details dashboard and fetched the average indexer time, which is around 0.9 seconds, and the time taken by the search to run on each indexer, which varied between 0.8 and 1.5 seconds. Given that, can you please share whether it is suitable to run the scheduled search on this Splunk infrastructure, or whether it can lead to negative implications and issues? The reason I ask is that it is a scheduled search and I cannot afford a search failure or an infrastructure breakdown, so I want to be sure of the approach. Thank you
Hi, how can we list all the apps' inputs.conf blacklist stanzas on the DS? I ask because command-line events are getting blocked in my environment. Thanks
This doesn't really answer the question. How about this (to try and clarify what your events mean): Is the count always 1? If so, it appears that average, minimum, maximum and total will always be the same number, right? That is, any one of them could be used as the value for the event? If not, which value do you want to use as the value for the event?
@ITWhisperer Yes, each event has a metricName field; CpuPercentage and MemoryPercentage are two of its values. The query has to be built in such a way that it calculates the CPU utilization percentage and throws an alert when CpuPercentage is more than 85%, and similarly for MemoryPercentage.
Yes, I did. The thing is that any new font I put into the fonts folder gives me gibberish.
In splunkdev2 we are trying to implement SAML authentication. We have configured it with a .pem certificate, but after applying the configuration the portal is not working. As per the team's guidance, they suggested upgrading the splunkdev server's key store to trust the public signing authority's digital certificate. How can we do that?
Dears, kindly support us in getting the Palo Alto app to work. The logs are coming into the environment, but the app is not working, as you can see in the pictures. Thank you in advance.
Hi @gayathrc ... I believe you have some network devices whose logs you want to monitor/send to Splunk. If so, you can use a syslog tool to forward the logs to a heavy forwarder (HF), and then from the HF send the logs to the Splunk indexer. If this is just a small POC project or use-case testing, you can achieve it without an HF (or even without syslog), but there will be data loss issues. Please provide some more details about the requirements, thanks.
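For a small POC only, a minimal sketch of letting the HF listen for syslog directly in inputs.conf (the index name here is a placeholder, and binding port 514 requires sufficient privileges):

[udp://514]
sourcetype = syslog
index = network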
Hello @isoutamo, Thank you for your response. The scheduled SPL is a report scheduled to run each minute, fetching datapoints for different conditions in each iteration and storing them in a lookup file, so that later I can read the data directly from the lookup for my analysis and also in dashboards and alerts. This gives faster access to the data and loads dashboards with all the relevant information in less time than reading and displaying the data from the index; it also gives faster access to the required historical data. So, as I understand from your inputs, in my current situation the relevant Splunk components are reserved for the scheduled search for 4 seconds each minute over the period of 1 hour, and thus it works as a normal historical scheduled search, with no extra resource workload and without causing slowness or breakdowns in the Splunk infrastructure. Please share if there is anything else that I need to correct or understand about this implementation. Thank you
Hi @Taruchit ... Some details about Splunk searches: On the Monitoring Console, you can run a health check, which will tell you what the system's performance looks like. The limits.conf file has settings that control how many searches the search head can run (for all types: real-time, historical, concurrent, etc.).

https://docs.splunk.com/Documentation/Splunk/9.1.1/admin/limitsconf#limits.conf.example

[scheduler]
# Percent of total concurrent searches that will be used by scheduler is
# total concurrency x max_searches_perc = 20 x 60% = 12 scheduled searches
# User default value (needed only if different from system/default value) when
# no max_searches_perc.<n>.when (if any) below matches.
max_searches_perc = 60

# Increase the value between midnight-5AM.
max_searches_perc.0 = 75
max_searches_perc.0.when = * 0-5 * * *

# More specifically, increase it even more on weekends.
max_searches_perc.1 = 85
max_searches_perc.1.when = * 0-5 * * 0,6

# Maximum number of concurrent searches is enforced cluster-wide by the
# captain for scheduled searches. For a 3 node SHC total concurrent
# searches = 3 x 20 = 60. The total searches (adhoc + scheduled) = 60, then
# no more scheduled searches can start until some slots are free.
shc_syswide_quota_enforcement = true
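To see whether the scheduler is already under pressure, one option is to look for skipped scheduled searches in the internal logs; a minimal sketch, assuming the standard scheduler sourcetype on your version:

index=_internal sourcetype=scheduler status=skipped
| stats count BY savedsearch_name, reason

If this returns rows, the scheduler is hitting its concurrency limits and the limits.conf settings above are worth tuning.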
Hi @ChrisValibia ... Splunk Enterprise (on-prem) supports the "Duo" multifactor authenticator, while Splunk Cloud Platform does not support multifactor authentication with Duo.

Splunk Docs for your reference: https://docs.splunk.com/Documentation/Splunk/9.1.1/Security/AboutMultiFactorAuth

About multifactor authentication with Duo Security: Multifactor authentication lets you configure a primary and secondary login for your Splunk Enterprise users. Duo Security multifactor authentication secures Splunk Web logins. Splunk Cloud Platform does not support multifactor authentication with Duo Security.
Hi, I was wondering if it is in any way possible to run scheduled browser tests on the Splunk Observability platform. I need tests to run from 2pm to 5pm; however, I can't seem to find a way to have them run only during those times.
Try something like this (to avoid joins):

index=main source="/media/ssd1/ip_command_log/command_log.log"
| eval exec_time=strptime(exec_time, "%a %b %d %H:%M:%S %Y")
| rename ip_execut as Access_IP
| table Access_IP, exec_time, executed_command
| append
    [ search index=main source="/media/ssd1/splunk_wtmp_output.txt"
    | dedup Access_time
    | eval Access_time=strptime(Access_time, "%a %b %d %H:%M:%S %Y")
    | eval Logoff_time=if(Logoff_time="still logged in", now(), strptime(Logoff_time, "%a %b %d %H:%M:%S %Y"))
    | table Access_IP, Access_time, Logoff_time ]
| eval event_time=coalesce(Access_time, exec_time)
| sort 0 event_time
| streamstats global=f latest(Access_time) as Access_time latest(Logoff_time) as Logoff_time by Access_IP
| where exec_time>=Access_time AND exec_time<=coalesce(Logoff_time,now())
| table Access_IP, Access_time, Logoff_time, exec_time, executed_command
Hi, it's like @gcusello said, but I want to add one comment. You should never use Splunk as a syslog receiver, even though it can do it; you will lose events more or less. It's much better to use real syslog servers to manage a centralised syslog tier. You could use e.g. rsyslog, syslog-ng or SC4S (Splunk Connect for Syslog). r. Ismo
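As a minimal sketch of the rsyslog side, assuming a heavy forwarder reachable at hf.example.com (a hypothetical hostname) and a drop-in file such as /etc/rsyslog.d/90-splunk.conf:

# forward all facilities/severities to the HF over TCP (@@ = TCP, a single @ = UDP)
*.* @@hf.example.com:514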
Hi, as you have scheduled it as a historical (not real-time) search, it's quite OK. It reserves those resources only for that 4s each minute for that SPL. If it were a real-time search (you basically never need that), it would reserve 1 CPU permanently on every node (SH + IDXs) in your environment that participates in the query. Then it's another question entirely whether an every-minute schedule is something you actually need. You should consider how important it is to get that alert and how fast you can react and fix the issue. Splunk is not an infrastructure monitoring system, even though you can use it for that! There are many other tools, including Splunk IM, which are better for that purpose. r. Ismo
Hello All, I have an SPL search which is scheduled to run each minute over a span of 1 hour. On each execution the search runs for 4 seconds with a result size of around 400KB. How do the scheduler and search head work in such a scenario at the backend? Does the scheduled SPL keep the scheduler and search head busy for the entire hour, or are they free to run other SPL searches during that hour? And can you share any negative implications for the Splunk infrastructure due to the above scheduled search? Any information would be very helpful. Thank you, Taruchit
After putting in the new fonts, should I make any changes to reportCIDFontList = <string> in alert_actions.conf?
Thanks, that does not work. We also know the following: if src_sg_info does not exist, then we know that there is no active VPN user. I do not know how to test for the existence of src_sg_info. Thanks again. Rgds Geir
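For reference, a minimal SPL sketch of testing whether src_sg_info exists on an event (the index name is a placeholder for your VPN data):

index=vpn
| eval vpn_active=if(isnotnull(src_sg_info), "yes", "no")

or, to keep only the events where the field is missing:

index=vpn
| where isnull(src_sg_info)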