All Posts

@ITWhisperer  Yes, each event has metricName, listed like this: CpuPercentage and MemoryPercentage are among the values of metricName. The query has to be built in such a way that it calculates the CPU utilization percentage and raises an alert when CpuPercentage is above 85%, and similarly for MemoryPercentage.
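A minimal sketch of such an alert search, assuming the metric's value lives in a field called metricValue and the data is in an index named your_index (both names are assumptions, not from the thread):

index=your_index metricName IN (CpuPercentage, MemoryPercentage)
| stats latest(metricValue) as value by host, metricName
| where value > 85

Saved as an alert, this triggers whenever the latest CPU or memory percentage for any host exceeds 85; the same search covers both metrics, so you don't need two alerts.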
Yes, I did. The thing is that any new font I put into the fonts folder gives me gibberish...
In splunkdev2 we are trying to implement SAML authentication. We have a configuration with a .pem certificate, but after applying it the portal is not working. The team suggested upgrading the splunkdev server key store to trust the public signing authority's digital certificate. How can we do that?
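One common approach is to append the signing authority's CA chain (intermediate plus root) to the IdP certificate file that authentication.conf points at, so Splunk trusts the full chain. A sketch, assuming default paths (the file locations below are assumptions; the [saml] stanza and idpCertPath attribute are from the authentication.conf spec):

# authentication.conf - sketch; verify the path against your own deployment
[saml]
idpCertPath = /opt/splunk/etc/auth/idpCerts/idp_cert_chain.pem

The referenced .pem would contain the IdP signing certificate followed by the intermediate and root CA certificates, concatenated in that order, and Splunk restarted afterwards.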
Dear all, kindly support us in making the Palo Alto app work. The logs are coming into the environment but the app is not working, as you can see in the pictures. Thank you in advance.
Hi @gayathrc ... I believe you have some network devices and you want to monitor/send the network device logs to Splunk. If so, you can use a syslog tool to forward the logs to a heavy forwarder (HF), and then from the HF send the logs to the Splunk indexer. If this is just a small POC project or use-case testing, you can achieve it without an HF (or even without syslog), but there will be data loss issues. Please provide some more details about the requirements, thanks.
Hello @isoutamo, thank you for your response. The scheduled SPL is a report that runs each minute to fetch datapoints for different conditions in each iteration and store them in a lookup file, so that later I can read data directly from the lookup for my analysis, and also in dashboards and alerts. This lets dashboards load all relevant information faster than reading and displaying the data from the index, and also makes the required historical data quicker to access. So, as I understand from your inputs, in my current situation the relevant Splunk components are reserved for the scheduled search for only 4 seconds each minute over the period of 1 hour. Thus it works as a normal historical scheduled search, with no extra resource workload and no slowness or breakdowns in the Splunk infrastructure due to its implementation. Please share if there is anything else that I need to correct or understand about this implementation. Thank you
Hi @Taruchit ... Some details about Splunk searches: on the Monitoring Console you can run a health check, which will tell you what the system's performance looks like. The limits.conf file has settings for controlling how many searches the search head can run (for all types: real-time, historical, concurrent, etc.).

https://docs.splunk.com/Documentation/Splunk/9.1.1/admin/limitsconf#limits.conf.example

[scheduler]
# Percent of total concurrent searches that will be used by the scheduler:
# total concurrency x max_searches_perc = 20 x 60% = 12 scheduled searches.
# Default value (needed only if different from the system/default value) when
# no max_searches_perc.<n>.when (if any) below matches.
max_searches_perc = 60
# Increase the value between midnight and 5AM.
max_searches_perc.0 = 75
max_searches_perc.0.when = * 0-5 * * *
# More specifically, increase it even more on weekends.
max_searches_perc.1 = 85
max_searches_perc.1.when = * 0-5 * * 0,6
# The maximum number of concurrent searches is enforced cluster-wide by the
# captain for scheduled searches. For a 3-node SHC, total concurrent
# searches = 3 x 20 = 60. When the total searches (adhoc + scheduled) = 60,
# no more scheduled searches can start until some slots are free.
shc_syswide_quota_enforcement = true
Hi @ChrisValibia ... Splunk Enterprise (on-prem) supports the Duo multifactor authenticator, but Splunk Cloud does not support Duo or any other multifactor authenticator. Splunk Docs for your reference: https://docs.splunk.com/Documentation/Splunk/9.1.1/Security/AboutMultiFactorAuth "About multifactor authentication with Duo Security: Multifactor authentication lets you configure a primary and secondary login for your Splunk Enterprise users. Duo Security multifactor authentication secures Splunk Web logins. Splunk Cloud Platform does not support multifactor authentication with Duo Security."
Hi, I was wondering if it is in any way possible to run scheduled browser tests on the Splunk Observability platform. I need tests to run from 2pm to 5pm, but I can't seem to find a way to have them run during only those times.
Try something like this (to avoid joins)

index=main source="/media/ssd1/ip_command_log/command_log.log"
| eval exec_time=strptime(exec_time, "%a %b %d %H:%M:%S %Y")
| rename ip_execut as Access_IP
| table Access_IP, exec_time, executed_command
| append
    [ search index=main source="/media/ssd1/splunk_wtmp_output.txt"
    | dedup Access_time
    | eval Access_time=strptime(Access_time, "%a %b %d %H:%M:%S %Y")
    | eval Logoff_time=if(Logoff_time="still logged in", now(), strptime(Logoff_time, "%a %b %d %H:%M:%S %Y"))
    | table Access_IP, Access_time, Logoff_time ]
| eval event_time=coalesce(Access_time, exec_time)
| sort 0 event_time
| streamstats global=f latest(Access_time) as Access_time latest(Logoff_time) as Logoff_time by Access_IP
| where exec_time>=Access_time AND exec_time<=coalesce(Logoff_time,now())
| table Access_IP, Access_time, Logoff_time, exec_time, executed_command
Hi, it's like @gcusello said, but I want to add one comment. You should never use Splunk as a syslog receiver, even though it can do it; you will lose events, more or less. It's much better to use a real syslog server as your centralised syslog server. You could use e.g. rsyslog, syslog-ng or SC4S (Splunk Connect for Syslog). r. Ismo
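For the rsyslog option, the forwarding rule itself is one line. A minimal sketch, assuming the heavy forwarder listens for syslog on TCP port 514 at 10.0.0.5 (both the address and port are assumptions):

# /etc/rsyslog.d/99-splunk-hf.conf - sketch only
# Forward all facilities/severities to the heavy forwarder over TCP
# (@@ means TCP, a single @ means UDP).
*.* @@10.0.0.5:514

In practice you would usually also have rsyslog write events to local files per device type and let a forwarder monitor those files, which protects against data loss when the HF is briefly unreachable.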
Hi, as you have scheduled it as a historical (not real-time) search, it's quite OK. It reserves those resources only for that 4s each minute for that SPL. If it were a real-time search (you basically never need one), it would reserve 1 CPU the whole time on every node (SH + IDXs) in your environment participating in that query. Then it's a totally different question whether an every-minute schedule is something you actually need. You should consider how important it is to get that alert and how fast you can react and fix the issue. Splunk is not an infrastructure monitoring system, even though you can use it for that! There are many other tools, including Splunk IM, which are better for that purpose. r. Ismo
Hello all, I have an SPL search which is scheduled to run each minute over a span of 1 hour. On each execution the search runs for 4 seconds with a result size of around 400KB. How do the scheduler and search head work in such a scenario at the backend? Does the scheduled SPL keep the scheduler and search head busy for the entire hour, or are they free to run other SPL searches during that span? And can you share any negative implications for the Splunk infrastructure due to the above scheduled search? Any information would be very helpful. Thank you, Taruchit
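For reference, an every-minute schedule like this is typically defined in savedsearches.conf roughly as below (a sketch only; the stanza name and search body are placeholders, the attributes are standard savedsearches.conf settings):

# savedsearches.conf - sketch; replace the stanza name and search with your own
[my_every_minute_report]
enableSched = 1
cron_schedule = * * * * *
dispatch.earliest_time = -1m
dispatch.latest_time = now

The scheduler only dispatches the job at each cron tick; the search slot is occupied just for the seconds the job actually runs, not for the whole hour.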
After putting in the new fonts, should I make any changes to reportCIDFontList = <string> in alert_actions.conf?
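For context, that setting lives in the [email] stanza of alert_actions.conf and lists CID font IDs in preference order for PDF rendering. A sketch with the commonly documented default order (verify the value against your Splunk version's alert_actions.conf spec):

# alert_actions.conf - sketch; value shown is the typical default order
[email]
reportCIDFontList = gb cns jp kor

Reordering the list changes which CID font wins when a glyph exists in several of them, which is often the cause of wrong or garbled characters in generated PDFs.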
Thanks, that does not work. Also, we know the following: if src_sg_info does not exist, then we know it's not an active VPN user. I don't know how to test for the existence of src_sg_info. Thanks again. Rgds, Geir
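A common way to test whether a field exists on an event is the isnotnull()/isnull() eval functions, which are standard SPL (a sketch; <your_search> stands for your base search):

<your_search>
| eval active_vpn_user=if(isnotnull(src_sg_info), "yes", "no")

Or, to simply keep only events where the field is absent:

<your_search>
| where isnull(src_sg_info)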
Thanks for your swift response. I need to calculate the duration between the first "fail" and the first "success" for every Item. Unfortunately the result is incorrect:

Item StartTime EndTime  Duration
B    02:40:05  02:45:05 00:05:00
B    02:20:05  02:25:05 00:05:00
B    02:15:05  02:30:05 00:15:00   ==> should be "B  02:15:05 02:25:05 00:10:00"
A    02:10:00  02:15:00 00:05:00
A    02:05:00  02:20:00 00:15:00   ==> should be "A  02:05:00 02:15:00 00:10:00"

I'd tried this method before; however, consecutive "Result=fail" events cause overlapping results.
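Since only the first fail and first success per Item matter, one join-free sketch avoids transaction entirely (field names Result and Item are from the thread; this assumes the first success comes after the first fail, as in the sample data):

<your_search>
| stats min(eval(if(Result="fail", _time, null()))) as StartTime
        min(eval(if(Result="success", _time, null()))) as EndTime
        by Item
| eval Duration=tostring(EndTime-StartTime, "duration")
| fieldformat StartTime=strftime(StartTime, "%H:%M:%S")
| fieldformat EndTime=strftime(EndTime, "%H:%M:%S")

Because stats takes the minimum timestamp per Result value, repeated consecutive fails cannot produce overlapping rows: there is exactly one output row per Item.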
Hi All, For the current version of Splunk Cloud, does it allow the integration with Google Authenticator for Multi-Factor Authentication?
Hi @WK, what's the condition for grouping? How can I recognize StartTime and EndTime? This is one of the few situations where to use the transaction command. If you want to trace when there's a Fail and a following Success, you could try something like this:

<your_search>
| transaction Item startswith="Result=Fail" endswith="Result=Success"
| eval StartTime=strftime(_time,"%H:%M:%S"), EndTime=strftime(_time+duration,"%H:%M:%S"), Duration=tostring(duration,"duration")
| table Item StartTime EndTime Duration

Ciao. Giuseppe
Hi all,

2023-10-25 10:56:46,709 WARN pool-1-thread-1 com.veeva.bpr.batchrecordprint.scheduledTasks - BOM Field Name: BOM_PPMDS_1, value is out of

The above WARN message needs to be replaced with an ERROR message; please find the ERROR message below.

2023-11-06 15:30:48,941 ERROR pool-1-thread-1 com.veeva.brp.batchrecordprint.ScheduledTasks - Unknown error: {errorType=GENERAL,

How do I write the props.conf and transforms.conf configuration files? Please help me.

Regards, Vijay K.
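If the goal is to rewrite "WARN" to "ERROR" in the raw event at index time, a SEDCMD in props.conf alone is enough; transforms.conf is only needed for more complex rewrites. A sketch (the sourcetype name is an assumption; set it to the sourcetype these logs actually use):

# props.conf - sketch; replace the sourcetype stanza with your own
[veeva:brp:log]
SEDCMD-warn_to_error = s/ WARN / ERROR /

This goes on the first full Splunk instance that parses the data (indexer or heavy forwarder), and only affects events indexed after the change.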
Hi @gayathrc , I suppose that you already have your Splunk infrastructure; if not, you have to engage a Splunk architect to design it. Anyway, are you speaking of packet capture or network switch logs? In the first case, you have to configure the Splunk App for Stream; for more details see
https://splunkbase.splunk.com/app/1809
https://splunkbase.splunk.com/app/5234
https://splunkbase.splunk.com/app/5238
If instead you have to use switch logs, you have to configure one of the components of your Splunk infrastructure (usually a Heavy Forwarder) as a receiver of network inputs (for more info see https://docs.splunk.com/Documentation/Splunk/9.1.1/Data/Monitornetworkports), then install the add-on related to your network technology (e.g. the Cisco add-on, https://splunkbase.splunk.com/app/1467) and then search for the extracted fields. If you don't have basic knowledge about Splunk searching, see the Splunk Search Tutorial (https://docs.splunk.com/Documentation/Splunk/latest/SearchTutorial/WelcometotheSearchTutorial). Ciao. Giuseppe
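The network input itself is a short inputs.conf stanza on the receiving component. A minimal sketch, assuming UDP syslog on port 514 (the port and sourcetype are assumptions; as noted elsewhere in this thread, a dedicated syslog server in front is preferable for production):

# inputs.conf on the Heavy Forwarder - sketch only
[udp://514]
sourcetype = syslog
connection_host = ip

connection_host = ip records the sender's IP address in the host field, which is usually what you want for switch logs.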