All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I have a dashboard that I have to refresh a lot. This is causing a lot of jobs to run on my Splunk install. The first set of jobs run, but they are kept by Splunk for 5 minutes (image below). How do I get them to expire after 5 seconds? I was trying some settings, but they are not working for me. Any ideas would be great, thanks.

default_save_ttl = 5
ttl = 5
remote_ttl = 5
srtemp_dir_ttl = 5
cache_ttl = 5
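For reference, job lifetimes are usually controlled per search; a minimal sketch of the relevant settings, assuming the dashboard panels are backed by saved searches (the stanza name is hypothetical):

```ini
# savedsearches.conf -- lifetime of the dispatched job for this saved search
[my_dashboard_search]
dispatch.ttl = 5

# limits.conf -- default lifetime for ad-hoc search jobs
[search]
ttl = 5
```

Note that limits.conf settings apply instance-wide, so lowering them affects every search on the instance, not just this dashboard.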
We have Splunk Ent. & ES, both on Windows & RHEL (Linux). Are the procedures much different for Windows vs Linux? Should I be writing one for each? Or just one procedure for our entire environment? Some of my UFs are as old as 7.2.9, all the way up to 8.0.7. Thanks a million.
Hi All, I'm using the Network Toolkit's external lookup ping for monitoring server downtime in my environment, but after increasing the packet count to 4 instead of the default 1, I'm getting 3-4 minutes of indexing delay in the data. Any ideas how I can reduce this delay, given that the script takes only 30 seconds to run?
Hi All, could you please help me. Scenario: I want a result where one field contains a specific value, but in the result I am getting all the values along with my specific value. Example:

index=xx port="10" OR port="110" | ...

The output is like:

hostname    port
            23 25 110 80 443

I want to show only port 10 or 110, not all the ports that are open for the host.

Note: "AND" will not work, as it will only match those hosts which have both ports 10 and 110 open.
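A minimal sketch of one way to get this, assuming the index and field names from the example above (untested): since the base search is already restricted to the two ports of interest, a stats over those events will only list those ports per host.

```spl
index=xx (port="10" OR port="110")
| stats values(port) AS matched_ports BY hostname
```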
Hi guys, I have an issue on my Splunk HF, which runs on a RedHat VM on Azure. I installed the AWS add-on, but when I try to configure it I get this view. Can someone help? Regards, Alessandro
When searching to see which sourcetypes are in the Endpoint data model, I am getting different results if I search:

| tstats `summariesonly` c as count from datamodel="Endpoint" by index, sourcetype

than when I search:

| tstats `summariesonly` c as count from datamodel="Endpoint.Processes" by index, sourcetype

Why wouldn't the sourcetypes under the Processes data set be included in the first search for sourcetypes in the Endpoint data model? Thanks.
Hi! Consider the following KPI base search monitoring the Windows service state:

index=wineventlog sourcetype="WinEventLog:System" SourceName="Microsoft-Windows-Service Control Manager"
| rex field=Message "(The) (?<ServiceName>.+) (service entered the) (?<ServiceState>.+) "
| eval ServiceState=case(ServiceState=="running",2,ServiceState=="stopped",0,1==1,1)

If I do not want to explicitly name the Windows service in the base search, how do I include the service name (here ServiceName) besides entity_title=host in the later created ITSI episode? Why? From the created episode we run a recovery action to restart a Windows service when it is stopped. For this we need to know the service name and the host it is running on. What we need are entity_title=host and ServiceName as dedicated fields available in the correlation search from this generic KPI base search. Performing an ITOA REST call is no problem.

Note: if I split by ServiceName, then the service name becomes the entity_title and the host is missing.

Maybe someone has an idea that can help us. We just want to avoid creating one KPI per Windows service. Cheers, Peter
How do I identify important metrics to create a dashboard?
Trying to create a new Splunk Cloud Platform for a newly created account. Seeing the error:   An internal error was detected when creating the stack. We're sorry, an internal error was detected when creating the stack. Please try again later.
We are getting the below error while running commands under the bin directory.

Pid file "/opt/splunk/var/run/splunk/splunkd.pid" unreadable.: Permission denied

xxxx bin]# ./splunk restart
Pid file "/opt/splunk/var/run/splunk/splunkd.pid" unreadable.: Permission denied
Pid file "/opt/splunk/var/run/splunk/splunkd.pid" unreadable.: Permission denied
splunkd.pid file is unreadable. [FAILED]
Pid file "/opt/splunk/var/run/splunk/splunkd.pid" unreadable.: Permission denied

Splunk> Be an IT superhero. Go home early.

Checking prerequisites...
Checking http port [8000]: not available
ERROR: http port [8000] - port is already bound. Splunk needs to use this port.

I have tried the steps from this post:
Solved: Splunk will not start and is waiting for config lo... - Splunk Community

[root@xxxxxx bin]# ./splunk clean locks
Pid file "/opt/splunk/var/run/splunk/splunkd.pid" unreadable.: Permission denied

Location: /opt/splunk/var/run/splunk
-rw-r-----. 1 root root 48 Nov 2 15:00 splunkd.pid
Folks, we had to do summary indexing of alerts created by saved searches. This has been accomplished with logevent (though it's NOT well documented in the Splunk docs). I've used https://docs.splunk.com/Documentation/Splunk/8.2.2/RESTREF/RESTsearch to set it up, and the tokens are all working well. The settings are like below:

logevent.param.index: test
logevent.param.sourcetype: my_summary_index_st
logevent.param.event: $name$ $result.*$

BUT, only the FIRST alert result is captured by the $result.*$ token. Any idea how to ensure all the events from the alert are captured? (`$results.*$` is NOT working either.)

PS: I've sent feedback to the docs team to update all the parameters, but the docs are lacking a lot compared to the alert functionalities.
Hello, I want to add dependent radio button functionality, as in the example below. When I click on 'TR DEPT' in Landscape View, the Filter radio buttons should be displayed, with the 'ALL' option selected by default. When I select the 'TR Failed' option, the Filter radio buttons should not be displayed and should be hidden. Could you please help me with the code? Thank you
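A rough Simple XML sketch of the depends/change pattern this usually takes; all token names and choice values here are hypothetical and would need to be adapted to the real dashboard:

```xml
<input type="radio" token="view_tok" searchWhenChanged="true">
  <label>Landscape View</label>
  <choice value="tr_dept">TR DEPT</choice>
  <choice value="tr_failed">TR Failed</choice>
  <change>
    <!-- show the Filter input only when TR DEPT is selected -->
    <condition value="tr_dept">
      <set token="show_filter">true</set>
      <set token="filter_tok">*</set>
    </condition>
    <condition value="tr_failed">
      <unset token="show_filter"></unset>
    </condition>
  </change>
</input>
<input type="radio" token="filter_tok" depends="$show_filter$">
  <label>Filter</label>
  <choice value="*">ALL</choice>
  <default>*</default>
</input>
```

The second input only renders while the `show_filter` token is set, so selecting 'TR Failed' hides it again.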
Hello, my Splunk cluster has an alert like: "The percentage of high priority searches skipped (21%) over the last 24 hours is very high and exceeded the red thresholds (10%) on this Splunk instance. Total Searches that were part of this percentage=23. Total skipped Searches=5. The percentage of non high priority searches skipped (22%) over the last 24 hours is very high and exceeded the red thresholds (20%) on this Splunk instance. Total Searches that were part of this percentage=8835. Total skipped Searches=1947". What can I do? Splunk has stopped working now! I have tried to change the running times of the rules and to spread them out as much as possible, but it still does not work. Thanks, Jason
I enabled Splunk forwarder firewall settings in Linux, but now port 8080 is not working. It shows the error below:

Search the web for jgusint
Checking the connection
Checking the proxy and the firewall
I have a quite sizeable environment on which I need to perform underlying Linux OS updates. There will most probably be kernel updates, so I will need to reboot the machines in order to apply them. So I'm wondering what precautions I have to take while restarting Splunk components. To make the situation more interesting, my cluster has a replication factor of just one (don't ask). I have an indexer cluster as well as a 3-member search head cluster. I will not be doing any Splunk software updates at the moment. It's at 8.1.something, and for now it stays at this version.

Any advice on the order of upgrades/restarts? If I understand correctly, I should be able to freely restart the master node, deployer and deployment server, and it should not affect my environment.

Should I do anything to the search heads? I mean, will the captaincy migrate on its own if I choose to take down the server that is the active captain, or should I force the captaincy migration manually? Of course, after the reboot of every single machine I would wait for it to fully start and rejoin the SHC.

In the case of the indexers, I'm fully aware that restarting a single indexer with a replication factor of just one means that during the downtime my results will be incomplete, but are there any possible issues other than that? (I have 4 HFs load-balancing across all my indexers, so I understand that I should have no problem with event loss.) I understand that I should do splunk offline before rebooting the machine, right? Anything else?

Of course I will restart one machine at a time and will wait for it to fully restart, rejoin and so on, wherever applicable. Any other caveats?
Hi All, I am confused as to why the earliest time modifier is not working in my case. Can someone please clarify why Splunk is displaying the results this way? Below is my search:

(index=linux source="/var/log/secure" "Failed password") earliest=-1d
| stats latest(_time) as _time, values(host), values(dest_ip), values(source), values(src_ip), values(src), dc(src_ip) as srcIpCount, dc(src) as srcCount, values(user), dc(user) as userCount, count as failures by src_ip dest
| rename values(*) as *
| table _time dest dest_ip host source srcIpCount src_ip srcCount src userCount user failures
| where failures>10 AND userCount>1

This does not show any results with earliest=-1d (today being Nov 2 2021). Now, if I change to earliest=-7d, it shows results where _time = Nov 1 2021, which is yesterday (screenshot below).

Why didn't earliest=-1d or even -2d show these results? What makes -7d pick these up but not -1d or -2d?
Hi, I have logs like the ones below and need to extract the "send & receive duration" and any "send that has no response".

this is a send:
2021-07-15 00:00:01,800 INFO CUST.InAB-ServerApp-1234567 [MyService] Packet Processed: A[50] B[0000211]
this is a send:
2021-07-15 00:00:01,893 INFO ABCD.DaQW-ParityGQQ-1231234 [MyService] Packet Processed: A[60] B[0000465]
this is a send:
2021-07-15 00:00:01,894 INFO MNBV.ZaQW-ChatCXZ-1478523 [MyService] Packet Processed: A[70] B[0000369]
this is a receive:
2021-07-15 00:00:11,719 INFO CUST.VqPO-Oracle7-9876543_CUST.InAB-ServerApp-1234567 [MyService] Normal Packet Received: A[55] B[0000211]
this is a receive:
2021-07-15 00:00:11,720 INFO EFGH.GaXZ-Carry2-3456789_ABCD.DaQW-ParityGQQ-1231234 [MyService] Normal Packet Received: A[65] B[0000456]

Here is what happens:

step1: find the send ids
CUST.InAB-ServerApp-1234567
ABCD.DaQW-ParityGQQ-1231234
MNBV.ZaQW-ChatCXZ-1478523

step2: find the response ids
CUST.VqPO-Oracle7-9876543_CUST.InAB-ServerApp-1234567
EFGH.GaXZ-Carry2-3456789_ABCD.DaQW-ParityGQQ-1231234

FYI: related events are structured like this:
Send: CUST.InAB-ServerApp-1234567
Receive: CUST.VqPO-Oracle7-9876543_CUST.InAB-ServerApp-1234567

step3: check the condition A+5 AND B=B to match related send/receive pairs.
2021-07-15 00:00:01,800 INFO CUST.InAB-ServerApp-1234567 [MyService] Packet Processed: A[50] B[0000211]
2021-07-15 00:00:11,719 INFO CUST.VqPO-Oracle7-9876543_CUST.InAB-ServerApp-1234567 [MyService] Normal Packet Received: A[55] B[0000211]

step4: finally, show the ids that have no receive, and the duration of each send & receive pair.

expected output:

id                                                        status
MNBV.ZaQW-ChatCXZ-1478523                                 no receive
CUST.VqPO-Oracle7-9876543_CUST.InAB-ServerApp-1234567     9,919
EFGH.GaXZ-Carry2-3456789_ABCD.DaQW-ParityGQQ-1231234      9,826

Any idea? Thanks
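The steps above could be sketched in SPL roughly as follows; the index name is hypothetical, the receive id is assumed to always end with the send id after an underscore, and the A+5 check from step 3 is left out for brevity (untested):

```spl
index=mylogs "[MyService]"
| rex "INFO (?<id>\S+) \[MyService\]"
| rex "A\[(?<A>\d+)\] B\[(?<B>\d+)\]"
| eval type=if(like(_raw, "%Packet Received%"), "receive", "send")
| eval send_id=if(type=="receive", mvindex(split(id, "_"), -1), id)
| stats range(_time) AS duration, dc(type) AS legs,
        values(eval(if(type=="receive", id, null()))) AS receive_id
        BY send_id, B
| eval status=if(legs < 2, "no receive", tostring(round(duration * 1000)))
| table send_id, receive_id, status
```

Grouping by send_id and B pairs each send with its receive; `legs < 2` flags sends that never got a response, and `range(_time)` gives the duration for the ones that did.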
I've got F5-LTM logs being sent as syslog in CEF format, I would like to break up the logs per-line with regex and I'm not making headway. Eventually I want to look like this for every event line instead of a multi-line event.    2021-10-25T09:13:49+11:00 10.100.5.4 CEF:0|vendor_product=F5-LTM|http_phase=Response|http_method=GET|site=CommonSWF_8080|x.x.x.x|x.x.x.x|dest_port=8080|uri_path=/|url=GET / HTTP/1.1|status=200|duration=12|http_user_agent=cosmos-v1|bytes_out=181 2021-10-25T09:13:49+11:00 10.100.5.4 CEF:0|vendor_product=F5-LTM|http_phase=Request|http_method=GET|site=/CommonCommon_8080|x.x.x.x|x.x.x.x|dest_port=8080|uri_path=Common/submittedFormList.html|url=GET Common/submittedFormList.html HTTP/1.1|http_user_agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36 2021-10-25T09:13:49+11:00 10.100.5.4 CEF:0|vendor_product=F5-LTM|http_phase=Request|http_method=GET|site=/CommonCommon_7003|x.x.x.x|x.x.x.x|dest_port=7003|uri_path=/|url=GET / HTTP/1.1|http_user_agent=cosmos-v1 2021-10-25T09:13:49+11:00 10.100.5.4 CEF:0|vendor_product=F5-LTM|http_phase=Response|http_method=GET|site=/CommonCommon_7003|x.x.x.x|x.x.x.x|dest_port=7003|uri_path=/|url=GET / HTTP/1.1|status=403|duration=1|http_user_agent=cosmos-v1|bytes_out=1166 2021-10-25T09:13:49+11:00 10.100.5.4 CEF:0|vendor_product=F5-LTM|http_phase=Request|http_method=POST|site=/CommonCommon_7020|x.x.x.x|x.x.x.x|dest_port=7020|uri_path=nerf/connection-application/connection-offer.jsf|url=POST nerf/connection-application/connection-offer.jsf HTTP/1.1|http_user_agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36 2021-10-25T09:13:50+11:00 10.100.5.4 CEF:0|vendor_product=F5-LTM|http_phase=Response|http_method=POST|site=/CommonCommon_7020|x.x.x.x|x.x.x.x|dest_port=7020|uri_path=nerf/connection-application/connection-offer.jsf|url=POST nerf/connection-application/connection-offer.jsf 
HTTP/1.1|status=200|duration=136|http_user_agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36|bytes_out=26270 2021-10-25T09:13:50+11:00 10.100.5.4 CEF:0|vendor_product=F5-LTM|http_phase=Response|http_method=GET|site=/CommonCommon_8080|x.x.x.x|x.x.x.x|dest_port=8080|uri_path=Common/submittedFormList.html|url=GET Common/submittedFormList.html HTTP/1.1|status=200|duration=700|http_user_agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36|bytes_out=330
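If the goal is to have each timestamped line indexed as its own event, this is normally handled with index-time line breaking in props.conf rather than a search-time regex; a sketch, with a hypothetical sourcetype name:

```ini
# props.conf on the indexer or heavy forwarder
[f5:ltm:cef]
SHOULD_LINEMERGE = false
# break before each ISO-8601 timestamp at the start of a line
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2}T)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%:z
MAX_TIMESTAMP_LOOKAHEAD = 30
```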
I want to suppress an alert for the next 4 hours. I am trying to use throttling along with the "each result" trigger condition. The Splunk query has more than 10 results as output, but I am getting only one, so the throttling is suppressing the "each result" trigger condition, which is not working as expected. Any thoughts on this? Attaching a screenshot.
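For context, throttling "each result" alerts is normally done by suppressing on one or more result fields, so that each distinct field value is throttled independently rather than the whole alert; a sketch of the equivalent savedsearches.conf settings, with a hypothetical stanza and field name:

```ini
# savedsearches.conf
[my_alert]
alert.suppress = 1
alert.suppress.period = 4h
# throttle per distinct value of this field instead of globally
alert.suppress.fields = host
```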
Hi, I've added a new row to an existing lookup file for testing a query, and now I need to delete the last couple of rows to restore it to its original form. Is that possible? If yes, please give me a solution! Thanks in advance!
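A common pattern for trimming a lookup from search, assuming a hypothetical lookup file name and a field value that identifies the test rows; note that outputlookup overwrites the file, so keep a backup copy first:

```spl
| inputlookup my_lookup.csv
| where field_name!="test_value"
| outputlookup my_lookup.csv
```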