All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Folks, we had to do summary indexing of alerts created by saved searches. This has been accomplished with the logevent alert action (though it's NOT well documented in the Splunk docs). I used https://docs.splunk.com/Documentation/Splunk/8.2.2/RESTREF/RESTsearch for the setup, and the tokens are all working well. The settings look like this:

logevent.param.index: test
logevent.param.sourcetype: my_summary_index_st
logevent.param.event: $name$ $result.*$

BUT only the FIRST result is captured by the $result.*$ token. Any idea how to ensure all the events from the alert are captured? (`$results.*$` is NOT working.) PS: I've sent feedback to the docs team to update all the parameters, but the docs are lacking a lot compared to the alert functionality.
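For anyone comparing notes, here is roughly how those settings sit in savedsearches.conf — a minimal sketch only; the stanza name and the search are placeholders, and it reproduces the behavior described above rather than fixing the single-result limitation:

# savedsearches.conf - minimal sketch; stanza name and search are placeholders
[my_summary_alert]
search = index=main error | stats count by host
action.logevent = 1
action.logevent.param.index = test
action.logevent.param.sourcetype = my_summary_index_st
action.logevent.param.event = $name$ $result.*$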
Hello, I want to add dependent radio button functionality for the example below. When I click 'TR DEPT' in the Landscape View, the Filter radio buttons should be displayed, with the 'ALL' option selected by default. When I select the 'TR Failed' option, the Filter radio buttons should be hidden. Could you please help me with the code? Thank you.
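For what it's worth, the usual Simple XML pattern for this is a change handler that sets or unsets a token the second input depends on. A sketch only — the input names, token names, and choice values are hypothetical, since the original dashboard XML isn't shown:

<fieldset>
  <input type="radio" token="view_tok" searchWhenChanged="true">
    <label>Landscape View</label>
    <choice value="tr_dept">TR DEPT</choice>
    <choice value="tr_failed">TR Failed</choice>
    <change>
      <condition value="tr_dept">
        <!-- show the Filter radios and reset them to ALL -->
        <set token="show_filter">true</set>
        <set token="form.filter_tok">all</set>
      </condition>
      <condition value="tr_failed">
        <!-- hide the Filter radios -->
        <unset token="show_filter"></unset>
      </condition>
    </change>
  </input>
  <input type="radio" token="filter_tok" depends="$show_filter$">
    <label>Filter</label>
    <choice value="all">ALL</choice>
    <default>all</default>
  </input>
</fieldset>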
Hello, my Splunk cluster has an alert like: "The percentage of high priority searches skipped (21%) over the last 24 hours is very high and exceeded the red thresholds (10%) on this Splunk instance. Total Searches that were part of this percentage=23. Total skipped Searches=5. The percentage of non high priority searches skipped (22%) over the last 24 hours is very high and exceeded the red thresholds (20%) on this Splunk instance. Total Searches that were part of this percentage=8835. Total skipped Searches=1947." What can I do? Splunk has stopped working now! I have tried changing the run times of the rules to disperse them as much as possible, but it still doesn't work. Thanks, Jason
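Not a fix, but as a diagnostic starting point (a sketch — it assumes your role can search the _internal index), the scheduler logs show which searches are being skipped and why:

index=_internal sourcetype=scheduler status=skipped
| stats count by savedsearch_name, reason
| sort - count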
I enabled the Splunk forwarder firewall settings in Linux, but now port 8080 is not working. It shows the error below:

Search the web for jgusint
Checking the connection
Checking the proxy and the firewall
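If the host happens to use firewalld (an assumption — the post doesn't say which distribution or firewall tool is in play), re-opening the port would look something like:

# assumes firewalld; adjust the port to whatever the forwarder is actually using
sudo firewall-cmd --permanent --add-port=8080/tcp
sudo firewall-cmd --reload
sudo firewall-cmd --list-ports   # verify the port is now open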
I have a quite sizeable environment on which I need to perform underlying Linux OS updates. There will most probably be kernel updates, so I will need to reboot the machines to apply them. I'm wondering what precautions I have to take while restarting Splunk components. To make the situation more interesting, my cluster has a replication factor of just one (don't ask). I have an indexer cluster as well as a 3-member search head cluster. I will not be doing any Splunk software updates at the moment; it's at 8.1.something and for now it stays at that version.

Any advice on the order of restarts? If I understand correctly, I should be able to freely restart the master node, deployer, and deployment server without affecting my environment. Should I do anything to the search heads? I mean, will the captaincy migrate on its own if I choose to bring down the server that is the active captain, or should I force the captaincy migration manually? Of course, after rebooting every single machine I would wait for it to fully start and rejoin the SHC.

In the case of the indexers, I'm fully aware that restarting a single indexer with a replication factor of just one means my results will be incomplete during the downtime, but are there any possible issues beyond that? (I have 4 HFs load-balancing across all my indexers, so I understand I should have no problem with event loss.) I understand that I should run splunk offline before rebooting the machine, right? Anything else? Of course I will restart one machine at a time and wait for it to fully restart, rejoin, and so on, wherever applicable. Any other caveats?
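For concreteness, the per-indexer sequence I have in mind looks roughly like this ($SPLUNK_HOME is a placeholder; this is my own sketch, not a verified runbook):

# on each indexer, one at a time
$SPLUNK_HOME/bin/splunk offline        # graceful shutdown; the peer hands off cleanly
sudo reboot                            # apply the kernel update
# once the host is back up:
$SPLUNK_HOME/bin/splunk status         # confirm splunkd is running again
# and on the master node, before touching the next peer:
$SPLUNK_HOME/bin/splunk show cluster-status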
Hi all, I am confused as to why the earliest time modifier is not working in my case. Could someone please clarify why Splunk is displaying the results this way? Below is my search:

(index=linux source="/var/log/secure" "Failed password") earliest=-1d
| stats latest(_time) as _time, values(host), values(dest_ip), values(source), values(src_ip), values(src), dc(src_ip) as srcIpCount, dc(src) as srcCount, values(user), dc(user) as userCount, count as failures by src_ip dest
| rename values(*) as *
| table _time dest dest_ip host source srcIpCount src_ip srcCount src userCount user failures
| where failures>10 AND userCount>1

This does not show any results with earliest=-1d (today being Nov 2 2021). If I change it to earliest=-7d, it shows results where _time = Nov 1 2021, which is yesterday (screenshot below).

Why didn't earliest=-1d, or even -2d, show these results? What makes -7d pick them up but not -1d or -2d?
Hi, I have logs like the ones below and need to extract the send/receive duration and any send that has no response.

These are sends:
2021-07-15 00:00:01,800 INFO CUST.InAB-ServerApp-1234567 [MyService] Packet Processed: A[50] B[0000211]
2021-07-15 00:00:01,893 INFO ABCD.DaQW-ParityGQQ-1231234 [MyService] Packet Processed: A[60] B[0000465]
2021-07-15 00:00:01,894 INFO MNBV.ZaQW-ChatCXZ-1478523 [MyService] Packet Processed: A[70] B[0000369]

These are receives:
2021-07-15 00:00:11,719 INFO CUST.VqPO-Oracle7-9876543_CUST.InAB-ServerApp-1234567 [MyService] Normal Packet Received: A[55] B[0000211]
2021-07-15 00:00:11,720 INFO EFGH.GaXZ-Carry2-3456789_ABCD.DaQW-ParityGQQ-1231234 [MyService] Normal Packet Received: A[65] B[0000456]

Here is what happens:

Step 1: find the send IDs:
CUST.InAB-ServerApp-1234567
ABCD.DaQW-ParityGQQ-1231234
MNBV.ZaQW-ChatCXZ-1478523

Step 2: find the response IDs:
CUST.VqPO-Oracle7-9876543_CUST.InAB-ServerApp-1234567
EFGH.GaXZ-Carry2-3456789_ABCD.DaQW-ParityGQQ-1231234

FYI, related events are structured like this (the receive ID ends with the send ID):
Send: CUST.InAB-ServerApp-1234567
Receive: CUST.VqPO-Oracle7-9876543_CUST.InAB-ServerApp-1234567

Step 3: match a receive to its send on the condition that the receive's A equals the send's A plus 5, AND the B values are equal:
2021-07-15 00:00:01,800 INFO CUST.InAB-ServerApp-1234567 [MyService] Packet Processed: A[50] B[0000211]
2021-07-15 00:00:11,719 INFO CUST.VqPO-Oracle7-9876543_CUST.InAB-ServerApp-1234567 [MyService] Normal Packet Received: A[55] B[0000211]

Step 4: finally, show the IDs that have no receive, and the duration of each send/receive pair.

Expected output:

id                                                        status
MNBV.ZaQW-ChatCXZ-1478523                                 no receive
CUST.VqPO-Oracle7-9876543_CUST.InAB-ServerApp-1234567     9,919
EFGH.GaXZ-Carry2-3456789_ABCD.DaQW-ParityGQQ-1231234      9,826

Any ideas? Thanks
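In case it helps the discussion, here is one possible SPL direction — a sketch under the structure described above (the index name and the rex are assumptions; it pairs a receive with its send by stripping the prefix up to the underscore, normalizing A by subtracting 5, and requiring B to match as in step 3):

index=mylogs "[MyService]"
| rex "INFO (?<id>\S+) \[MyService\].*A\[(?<A>\d+)\] B\[(?<B>\d+)\]"
| eval type=if(match(_raw, "Packet Received"), "receive", "send")
| eval send_id=if(type=="receive", replace(id, "^.*_", ""), id)
| eval A_norm=if(type=="receive", A-5, A)
| stats earliest(_time) as send_time latest(_time) as recv_time
        values(eval(if(type=="receive", id, null()))) as recv_id
        dc(type) as sides
        by send_id, A_norm, B
| eval status=if(sides<2, "no receive",
                 tostring(round((recv_time-send_time)*1000), "commas"))
| eval id=coalesce(recv_id, send_id)
| table id, status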
I've got F5-LTM logs being sent as syslog in CEF format. I would like to break up the logs per line with regex, and I'm not making headway. Eventually I want every event to look like one of the lines below, one line per event, instead of a single multi-line event:

2021-10-25T09:13:49+11:00 10.100.5.4 CEF:0|vendor_product=F5-LTM|http_phase=Response|http_method=GET|site=CommonSWF_8080|x.x.x.x|x.x.x.x|dest_port=8080|uri_path=/|url=GET / HTTP/1.1|status=200|duration=12|http_user_agent=cosmos-v1|bytes_out=181
2021-10-25T09:13:49+11:00 10.100.5.4 CEF:0|vendor_product=F5-LTM|http_phase=Request|http_method=GET|site=/CommonCommon_8080|x.x.x.x|x.x.x.x|dest_port=8080|uri_path=Common/submittedFormList.html|url=GET Common/submittedFormList.html HTTP/1.1|http_user_agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36
2021-10-25T09:13:49+11:00 10.100.5.4 CEF:0|vendor_product=F5-LTM|http_phase=Request|http_method=GET|site=/CommonCommon_7003|x.x.x.x|x.x.x.x|dest_port=7003|uri_path=/|url=GET / HTTP/1.1|http_user_agent=cosmos-v1
2021-10-25T09:13:49+11:00 10.100.5.4 CEF:0|vendor_product=F5-LTM|http_phase=Response|http_method=GET|site=/CommonCommon_7003|x.x.x.x|x.x.x.x|dest_port=7003|uri_path=/|url=GET / HTTP/1.1|status=403|duration=1|http_user_agent=cosmos-v1|bytes_out=1166
2021-10-25T09:13:49+11:00 10.100.5.4 CEF:0|vendor_product=F5-LTM|http_phase=Request|http_method=POST|site=/CommonCommon_7020|x.x.x.x|x.x.x.x|dest_port=7020|uri_path=nerf/connection-application/connection-offer.jsf|url=POST nerf/connection-application/connection-offer.jsf HTTP/1.1|http_user_agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36
2021-10-25T09:13:50+11:00 10.100.5.4 CEF:0|vendor_product=F5-LTM|http_phase=Response|http_method=POST|site=/CommonCommon_7020|x.x.x.x|x.x.x.x|dest_port=7020|uri_path=nerf/connection-application/connection-offer.jsf|url=POST nerf/connection-application/connection-offer.jsf HTTP/1.1|status=200|duration=136|http_user_agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36|bytes_out=26270
2021-10-25T09:13:50+11:00 10.100.5.4 CEF:0|vendor_product=F5-LTM|http_phase=Response|http_method=GET|site=/CommonCommon_8080|x.x.x.x|x.x.x.x|dest_port=8080|uri_path=Common/submittedFormList.html|url=GET Common/submittedFormList.html HTTP/1.1|status=200|duration=700|http_user_agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.54 Safari/537.36|bytes_out=330
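In props.conf terms, the goal would be something like this sketch (the sourcetype name is a placeholder; the regex assumes, per the sample, that every event starts with an ISO 8601 timestamp):

# props.conf - sketch; sourcetype name is a placeholder
[f5:ltm:cef]
SHOULD_LINEMERGE = false
# break on newlines only when the next line starts with a timestamp
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 30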
I want to suppress an alert for the next 4 hours. I am trying to use throttling along with the "for each result" trigger condition. The Splunk query produces more than 10 results as output, but I am getting only one alert, so the throttle is suppressing the per-result trigger condition, which is not what I expected. Any thoughts on this? Screenshot attached.
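For reference, the per-result throttling knobs in savedsearches.conf look like this (a sketch — "host" is a hypothetical field; without alert.suppress.fields, the first trigger suppresses every later result for the whole period, which sounds like what is happening here):

# savedsearches.conf - sketch; "host" is a placeholder field
alert.digest_mode = 0            # trigger once per result
alert.suppress = 1
alert.suppress.period = 4h
alert.suppress.fields = host     # throttle per field value, not globally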
Hi, I've added a new row to an existing lookup file to test a query, and now I need to delete the last couple of rows to restore it to its original form. Is that possible? If yes, please give me a solution!

Thanks in advance!
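One common approach (a sketch — the lookup name and the condition identifying the test rows are placeholders) is to rewrite the lookup file without the unwanted rows:

| inputlookup mylookup.csv
| where user!="test_value"
| outputlookup mylookup.csv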
I just want to configure BREAK_ONLY_BEFORE. When I save the source type, Splunk automatically adds LINE_BREAKER. I do not want LINE_BREAKER to be there, as it overrides the regex that I have specified in BREAK_ONLY_BEFORE. I have tried many things. I want it to look like the first screenshot. But when I save it, Splunk automatically adds the regex I specified for BREAK_ONLY_BEFORE as LINE_BREAKER, and the result is the second screenshot: Splunk removes the pg-2. What should I do to keep my regex from being removed by Splunk while still splitting the data into separate events?
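In props.conf terms, the intent would be something like the sketch below (the sourcetype name and regex are placeholders based on the "pg-2" text mentioned; note that BREAK_ONLY_BEFORE is only honored when SHOULD_LINEMERGE is true, which may be related to what the UI keeps rewriting):

# props.conf - sketch; sourcetype and regex are placeholders
[my_sourcetype]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = ^pg-\d+
# LINE_BREAKER left at its default ([\r\n]+) so it only splits on
# newlines and does not consume the text BREAK_ONLY_BEFORE matches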
Hello guys... I'm new to Splunk and I'm having trouble with the perfmon stanza. It looks like it is not getting any data, and I need to make some graphs about CPU usage etc. Any ideas? This is my inputs.conf (screenshot). By the way, can you give me some ideas for easy dashboards to deploy for Windows local logs and performance, for when I fix this problem? I've read a lot of docs and forums but have no ideas. It's my local Windows machine, so I'm just getting data from my own computer, if that matters. Love you.
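For comparison, a minimal working perfmon stanza follows (a sketch in the Splunk Add-on for Windows style — the index name is a placeholder, and the counter names must exactly match what Performance Monitor shows in your locale):

# inputs.conf - sketch; index name is a placeholder
[perfmon://CPU]
object = Processor
counters = % Processor Time
instances = _Total
interval = 30
index = windows_perf
disabled = 0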
I have a field called "command" with the below input: C:\windows\systems32\cmd.exe /c ""c:\program Files(x86)\Microsoft. I want to extract all the special characters and get a count of the number of times each special character is used. For example, I want my result to look like this:

Special character    Count
\                    5
/                    1
""                   1
(                    1
)                    1

How can I get this done?
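One possible SPL sketch follows. The makeresults line is only there to make it self-contained; replace it with your real base search. The character class defining "special" is an assumption: it excludes letters, digits, whitespace, dots, and colons so the sample yields the table above, and the + groups adjacent repeats so "" counts once:

| makeresults
| eval command="C:\\windows\\systems32\\cmd.exe /c \"\"c:\\program Files(x86)\\Microsoft"
| rex field=command max_match=0 "(?<special>[^\w\s.:]+)"
| mvexpand special
| stats count by special
| sort - count
| rename special as "Special character", count as Count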
I am facing the exact same issue as described in this forum post; however, in my case, the outcome of the same loadjob command differs between search heads in the cluster. For example, I have a dashboard powered by searches that use the | loadjob command to load the results of scheduled searches. If I access the dashboard on the primary SH, it loads the various panels except for 2-3 of them. However, if I access the same dashboard from a different SH in the cluster, it loads the panels that didn't load on the primary SH but leaves empty some panels that did load there. E.g., if the dashboard has panels 1-3, 4-6, and 7-9, then panels 1-3 and 4-6 load on SH1 but 7-9 don't, whereas on SH2 panels 1-3 and 7-9 load but not 4-6. When I click "Inspect Job" on the loadjob search, it says "no matching fields found", whereas the same search runs fine on another SH.
I need the Universal Forwarders to send Windows Security logs to two different indexers, but the two destinations have different criteria. I need to send all Windows Security events, without a whitelist, to Indexer1, and send only whitelisted Windows Security events to Indexer2. Indexer2 is in another country, which will provide 24/7 SOC support, and there's a bandwidth limitation. Is this possible?
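In outline, this kind of split is typically done with two tcpout groups plus selective routing — a sketch with placeholder host names and a placeholder EventCode whitelist; note that regex-based routing in props/transforms only applies where events are parsed, i.e. on a heavy forwarder or indexer tier rather than on the UF itself, and the REGEX assumes classic (non-XML) event format:

# outputs.conf (on the forwarder) - sketch; host names are placeholders
[tcpout]
defaultGroup = indexer1_group

[tcpout:indexer1_group]
server = indexer1.example.local:9997

[tcpout:indexer2_group]
server = indexer2.example.remote:9997

# props.conf (on the heavy forwarder)
[source::WinEventLog:Security]
TRANSFORMS-route_soc = route_to_soc

# transforms.conf - the EventCode list is a placeholder whitelist
[route_to_soc]
REGEX = EventCode=(4624|4625|4688)
DEST_KEY = _TCP_ROUTING
FORMAT = indexer1_group,indexer2_group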
Warning: Splunk noob question. I have a base search:

source="Administrator_logs" name="An account failed to log on"

Using https://community.splunk.com/t5/Splunk-Search/Getting-Average-Number-of-Requests-Per-Hour/m-p/73506 I can calculate hourly averages:

source="Administrator_logs" name="An account failed to log on"
| eval reqs = 1
| timechart span=1h per_hour(reqs) as AvgReqPerHour

What I would like to do is calculate a baseline. Having never done this before, my thought is to calculate the hourly average and either the standard deviation and/or some percentile, e.g. the 90th, over all events, as opposed to just the last day/week/month (although that would be interesting too). Eventually, this baseline calculation will be the basis for an alert, e.g. alert if the hourly count is outside 1 standard deviation or the 90th percentile.

Q1: How do I calculate the hourly average over all events?
Q2: How do I calculate the hourly standard deviation over all events?
Q3: How do I calculate the hourly 90th percentile over all events?
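A sketch of one way to get all three in a single search (these are standard stats functions; run it over "All time" or whatever window counts as "all events" for you — first count events per hour, then summarize those hourly counts):

source="Administrator_logs" name="An account failed to log on"
| bin _time span=1h
| stats count as hourly_count by _time
| stats avg(hourly_count) as avg_per_hour
        stdev(hourly_count) as stdev_per_hour
        perc90(hourly_count) as p90_per_hour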
We have a requirement to configure Splunk with CA-issued certificates. We are running Splunk 8.2.2.1. In the test environment: two standalone Splunk instances. In the other environments, a cluster:

3-node SH cluster + SH deployer
3-node indexer cluster + CM
License Master / Monitoring server
Deployment Server
Heavy forwarders

I tried to configure a standalone server's Splunk Web (8443) and splunkd (8089) using the new CA-issued cert. But after I configured it for splunkd on 8089, it breaks the web UI and the command line; also, when I run openssl from another server, it shows connected but then hangs and does not show the certs. I came across the following link, but it was for Splunk 6 and things have changed a lot since then: https://community.splunk.com/t5/Security/Custom-Certificate-for-Port-8089/m-p/362377

We also want to configure the SH cluster to use the CA-issued cert for splunkd (8089), but I could not find docs for an SH cluster.

On the standalone Splunk instance:

cat /opt/splunk/etc/system/local/web.conf
[settings]
httpport = 8443
enableSplunkWebSSL = 1
sslVersions = tls1.2
sslPassword = $7$1_encrypted_password_lzShn0euEM5Yi9m6pUPS38TkYu1lDDsg=
serverCert = etc/auth/splunkweb/QA_Splunk_Concatenated.pem
privKeyPath = etc/auth/splunkweb/QA_Splunk_PrivateKey.key

cat /opt/splunk/etc/system/local/server.conf
[general]
serverName = xxx.test
pass4SymmKey = $7$k_encryted_key==

[sslConfig]
#serverCert = server.pem
sslPassword = $7$3_encryted_key==
sslVersions = tls1.2
enableSplunkdSSL = true
serverCert = /opt/splunk/etc/auth/splunkweb/QA_Splunk_Concatenated.pem
#requireClientCert = false

Is this correct? Also, do I need to request a separate cert for each SH member? Will this impact other communication between the SH cluster and the indexer cluster, license master, monitoring console, and SH deployer?
I am attempting to configure the TA-MS_O365_Reporting app but can't seem to get the permissions correct. I've configured a user account in Azure AD called "splunk" and, from the Exchange Admin console, assigned it to a custom role with the four required permissions:

Message Tracking
View-Only Audit Logs
View-Only Configuration
View-Only Recipients

But when I enable the input and then check Splunk's internal logs, I see the following error:

401 Client Error: Unauthorized for url: https://reports.office365.com/ecp/reportingwebservice/reporting.svc/MessageTrace?$filter=StartDate%20eq%20datetime'2021-10-27T20:08:33.583692Z'%20and%20EndDate%20eq%20datetime'2021-10-27T21:08:33.583692Z

Is there something I am missing with regard to setting up the permissions? Thank you.
My base search is trying to show the amount of GB left on servers that I have deployed the Windows add-on for Splunk to. (Not final - just trying to get it to work. Base search below.)

source="perfmonmk:logicaldisk" earliest=-7d
| eval gb_free=Free_Megabytes/1024
| timechart span=1d max(gb_free) AS GB_Free

I want to use a chain search like host=this_server and instance=C: to target specific servers and drives, to show how much space is left on each in a chart. However, I run into many problems when trying to use a chain search to narrow down the base search's results. If anyone has ideas on how to search for specific fields from a base search without error, it would help a lot. The specific error is "Unknown search command" from the chain search.
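In Simple XML, the base/post-process wiring usually looks like the sketch below (the search id and the literal host/instance values are placeholders). Two things commonly cause the "Unknown search command" error: the post-process query has to begin with a command such as search rather than a bare host=... term, and the base search should end with a fields (or table) command so the post-process still has the raw fields to work with:

<search id="disk_base">
  <query>source="perfmonmk:logicaldisk" earliest=-7d
| eval gb_free=Free_Megabytes/1024
| fields _time host instance gb_free</query>
</search>

<chart>
  <search base="disk_base">
    <query>search host="this_server" instance="C:"
| timechart span=1d max(gb_free) AS GB_Free</query>
  </search>
</chart>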
Hello, I'd like to create a search for multiple alerts on the same host. The idea is to get results for each host that has seen more than 10 malicious-file alerts within, let's say, the last 72 hours from now. I tried something like this:

index=xyz sourcetype=xyz:123
| bin span=1d createdDate
| eval createdDate_epoch=strptime(createdDate,"%Y-%m-%d")
| eval today_epoch=now() `comment("#### ####")`
| eval days_lapsed=round((today_epoch - createdDate_epoch)/86400,0)
| where days_lapsed<=3
| stats earliest(createdDate) as createdDate ```values(file_name) as file_name values(filePath) as filePath values(agentComputerName) as agentComputerName``` values(category) as category values(siteName) as siteName values(file_hash) as file_hash values(signature) as signature dc(file_name) as number_of_alerts max(days_lapsed) as days_lapsed by agentComputerName
| where number_of_alerts>4

In my case that tells me there were 5 alerts for malicious files (some of the files occurred more than once in these 3 days). And all would be great if not for the fact that I get only one date (createdDate), whereas I'd like to see all dates per file, i.e. when each file was created. How do I need to modify my search to get where I need to be? Thank you!
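One possible direction (a sketch reusing the field names from the search above — first collect the dates per file, then roll up per host so each file keeps its own list of dates; the earliest modifier and the >10 threshold come from the stated goal):

index=xyz sourcetype=xyz:123 earliest=-72h
| stats values(createdDate) as file_dates count as file_alerts by agentComputerName, file_name
| eventstats dc(file_name) as number_of_alerts by agentComputerName
| where number_of_alerts > 10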