All Topics

Hi, I have deployed a search head cluster with three members and one deployer. The Splunk documentation recommends running a third-party hardware or software load balancer in front of the clustered search heads. Does Splunk recommend any particular load balancer that is most compatible?
Hi, I am not able to log in to any of the servers (CM, SH, and others). When I enter the username and password, it shows "Login Failed". What could be the reason, and how can I troubleshoot this from the back end?
Hi all, a customer asked me whether it's possible to show an alias instead of the hostname in the Monitoring Console dashboards. I know that it's easy to do this in normal Splunk searches, but is it possible in the Monitoring Console dashboards (e.g. Summary, Overview, or Instances)? Ciao. Giuseppe
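For reference, a minimal sketch of the lookup-based aliasing that works in an ordinary search (the lookup definition host_aliases and its fields host and alias are hypothetical, not from the post); whether the same can be wired into the Monitoring Console dashboards is the open question here:

index=_internal sourcetype=splunkd
| lookup host_aliases host OUTPUT alias
| eval display_host=coalesce(alias, host)
| stats count by display_host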
Can I run the AppDynamics PHP agent on an Alpine Docker image?
I can see logs from a Cisco ASA firewall in Splunk, and we get a log when a connection closes. It has the total data sent in bytes.

Nov 1 12:19:48 ASA-FW-01 : %ASA-6-302014: Teardown TCP connection 4043630532 for INSIDE-339:192.168.42.10/37308 to OUTSIDE-340:192.168.36.26/8080 duration 0:00:00 bytes 6398 TCP FINs from INSIDE-VLAN339

I am unable to see bytes as a valid field, so I tried to create an extraction ("Extract New Fields") for this:

^(?:[^:\n]*:){8}\d+\s+(?P<BYTES>\w+\s+)

But when I use it in a search, it fails:

index=asa_* src_ip = "192.168.42.10" | rex field=_raw DATA=0 "^(?:[^:\n]*:){8}\d+\s+(?P<BYTES>\w+\s+)"

OBJECTIVE: calculate server throughput for flows using the Cisco ASA logs, so I can view the network throughput for the flows in Splunk.
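A minimal sketch of one possible extraction, anchoring on the literal "duration" and "bytes" keywords in the teardown message instead of counting colons (the index pattern asa_* and the source IP come from the post; the 5-minute span is an assumption):

index=asa_* "%ASA-6-302014" src_ip="192.168.42.10"
| rex "duration\s+(?<dur>\d+:\d+:\d+)\s+bytes\s+(?<bytes>\d+)"
| eval dur_parts=split(dur, ":")
| eval dur_secs=tonumber(mvindex(dur_parts,0))*3600 + tonumber(mvindex(dur_parts,1))*60 + tonumber(mvindex(dur_parts,2))
| eval throughput_Bps=if(dur_secs>0, bytes/dur_secs, bytes)
| timechart span=5m sum(bytes) AS total_bytes avg(throughput_Bps) AS avg_throughput_Bps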
Hi there: I have the following makeresults query:

| makeresults count=3
| eval source="abc"
| eval msg="consumed"
| eval time_1="2023-11-09T21:33:05Z"
| eval time_2="2023-11-09T21:40:05Z"

I want to create three different events where the values of time_1 and time_2 are different for each event. How can I do that? Thanks!
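A minimal sketch of one way to do this: number the rows with streamstats and offset each row's times from a base timestamp (the 5-minute offset per row is an arbitrary example, not from the post):

| makeresults count=3
| streamstats count AS row
| eval source="abc", msg="consumed"
| eval base=strptime("2023-11-09T21:33:05Z", "%Y-%m-%dT%H:%M:%SZ")
| eval time_1=strftime(base + (row-1)*300, "%Y-%m-%dT%H:%M:%SZ")
| eval time_2=strftime(base + (row-1)*300 + 420, "%Y-%m-%dT%H:%M:%SZ")
| fields - base row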
Hello, how can I filter out a row if some fields are empty, but keep it if one of the fields has a value? I appreciate your help. Thank you.

I want to filter out a row if the vuln, score, and company fields are all empty/NULL (all 3 fields are empty in rows 2 and 6 in the table below). If the vuln OR company field has a value (is NOT empty), do not filter:

Row 4: vuln=empty, company=company D (NOT empty)
Row 9: vuln=vuln9 (NOT empty), company=empty

If I use the search below, it also filters out rows where vuln OR company is empty (rows 4 and 9):

index=testindex vuln=* AND score=* AND company=*

Current data:

no  ip        vuln    score  company
1   1.1.1.1   vuln1   9      company A
2   1.1.1.2
3   1.1.1.3   vuln3   9      company C
4   1.1.1.4                  company D
5   1.1.1.5   vuln5   7      company E
6   1.1.1.6
7   1.1.1.7   vuln7   5      company G
8   1.1.1.8   vuln8   5      company H
9   1.1.1.9   vuln9
10  1.1.1.10  vuln10  4      company J

Expected result (rows 2 and 6 filtered out):

no  ip        vuln    score  company
1   1.1.1.1   vuln1   9      company A
3   1.1.1.3   vuln3   9      company C
4   1.1.1.4                  company D
5   1.1.1.5   vuln5   7      company E
7   1.1.1.7   vuln7   5      company G
8   1.1.1.8   vuln8   5      company H
9   1.1.1.9   vuln9
10  1.1.1.10  vuln10  4      company J
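A minimal sketch of one way to keep a row when at least one of the three fields has a value (this assumes the empty fields are genuinely null/missing; if they are empty strings, swap isnotnull() for len()>0 checks):

index=testindex
| where isnotnull(vuln) OR isnotnull(score) OR isnotnull(company)
| table no ip vuln score company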
Looking for help to remove outliers (values greater than the 90th percentile of responses). For example:

Response Time
--------------------
1 Second
2 Seconds
3 Seconds
4 Seconds
5 Seconds
6 Seconds
7 Seconds
8 Seconds
9 Seconds
10 Seconds

The 90th percentile for the above values is 9 seconds. I want to remove the outlier (10 seconds) and get the average response for the remaining values. My expected average response (after removing the outlier) = 5 seconds.

My query is:

index="dynatrace" sourcetype="dynatrace:usersession"
| spath output=user_actions path="userActions{}"
| stats count by user_actions
| spath output=pp_user_action_application input=user_actions path=application
| where pp_user_action_application="******"
| spath output=User_Action_Name input=user_actions path=name
| spath output=pp_user_action_response input=user_actions path=visuallyCompleteTime
| eval User_Action_Name=substr(User_Action_Name,0,150)
| eventstats avg(pp_user_action_response) AS "Avg_Response" by Proper_User_Action
| stats count(pp_user_action_response) AS "Total_Calls", perc90(pp_user_action_response) AS "Perc90_Response" by User_Action_Name Avg_Response
| eval Perc90_Response=round(Perc90_Response,0)/1000
| eval Avg_Response=round(Avg_Response,0)/1000
| table Proper_User_Action, Total_Calls, Perc90_Response
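A minimal sketch of the outlier-trim step on top of the fields already extracted above: compute the per-action 90th percentile with eventstats, drop anything above it, and only then take the average (field names follow the query above; the /1000 unit conversion is kept from it):

... | eventstats perc90(pp_user_action_response) AS p90 by User_Action_Name
| where tonumber(pp_user_action_response) <= p90
| stats count AS Total_Calls avg(pp_user_action_response) AS Avg_Response by User_Action_Name
| eval Avg_Response=round(Avg_Response/1000, 2)
| table User_Action_Name, Total_Calls, Avg_Response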
I have created an app for a team that I work with, and have set up mapping from our SAML auth so that the people on the team get a role that has access to the app. I would like these folks, when they log in (they only have this one role, no other roles -- not even the default user role), to land on the home page for the app. As I understand it, that's supposed to be accomplished with the default_namespace parameter, set in $SPLUNK_HOME/etc/apps/user-prefs/local/user-prefs.conf.

In a regular browser window, now, when they log in, they get a 404 page for the app's home page (en-US/app/<appname>/search). If they do it in an incognito/private browsing window, they land on the Launcher app, can then navigate to the app, and it works just fine. The app's home page exists and is absolutely NOT a 404; after logging in in incognito, the URL they get when they manually navigate to the app is identical to the link they land on when logging in without incognito. (Ideally, I don't want these users to have access to the Launcher app at all. But for now, they have to, in order to work around this.)

We have a distributed environment (multiple indexers, multiple load-balanced search heads with a VIP). This is the first time I've worked in a distributed environment, so I'm assuming it has something to do with that. Any tips on what I'm doing wrong?
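For reference, a minimal sketch of the setting being described, assuming it is applied for all users via the [general_default] stanza of the user-prefs app (the <appname> placeholder stands for the app's directory name); in a load-balanced setup the file would also need to exist on every search head behind the VIP:

# $SPLUNK_HOME/etc/apps/user-prefs/local/user-prefs.conf
[general_default]
default_namespace = <appname>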
Hi, when using the JDK 8+ Java agent version 22.12.0, I see the error below:

$ java -javaagent:/cache/javaagent.jar -version
Unable to locate appagent version to use - Java agent disabled
openjdk version "1.8.0_382"
OpenJDK Runtime Environment (Zulu 8.72.0.17-CA-linux64) (build 1.8.0_382-b05)
OpenJDK 64-Bit Server VM (Zulu 8.72.0.17-CA-linux64) (build 25.382-b05, mixed mode)

What is the compatible Java agent version for the above Java version?
Example logs:

2022-08-19 08:10:53.0593|**Starting**
2022-08-19 08:10:53.5905|fff
2022-08-19 08:10:53.6061|dd
2022-08-19 08:10:53.6218|Shutting down
2022-08-19 08:10:53.6218|**Starting**
2022-08-19 08:10:53.6374|fffff
2022-08-19 08:10:53.6686|ddd
2022-08-19 08:10:53.6843|**Starting**
2022-08-19 08:10:54.1530|aa
2022-08-19 08:10:54.1530|vv

From this I have created three columns: Devicenumber, _time, Description. If a **Starting** message is followed by a "Shutting down" message, it should be classified as good; if a Starting message is not followed by a Shutting down, it should be classified as bad. From the above example, there should be two bad and one good. If there is only one row, containing only a Starting and no Shutting down recorded, it should also be classified as bad.
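A minimal sketch of one way to pair the markers with transaction, assuming the field names Devicenumber and Description from the post (the index filter is a placeholder); closed_txn=1 means the transaction was terminated by the endswith condition, i.e. a Shutting down was seen:

index=<your_index> ("Starting" OR "Shutting down")
| rex field=Description "(?<marker>Starting|Shutting down)"
| transaction Devicenumber startswith=(marker="Starting") endswith=(marker="Shutting down") keepevicted=true
| eval status=if(closed_txn=1, "good", "bad")
| stats count by status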
I'm trying to run a lookup against a list of values in an array. I have a CSV which looks as follows:

id   x      y
123  Data   Data2
321  Data   Data2
456  Data3  Data3

The field from the search is an array which looks as follows: ["123", "321", "456"]

I want to map the lookup values. Do I need to iterate over the field, can I use a lookup, or what is the best option?
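A minimal sketch of one approach, assuming the array lives in a JSON field called ids and the CSV has been set up as a lookup definition called my_ids (both names are placeholders): extract the array into a multivalue field, expand it to one row per id, then look up x and y:

... | spath output=id path="ids{}"
| mvexpand id
| lookup my_ids id OUTPUT x y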
I have a working query that uses transaction to find the starting/ending log event. I am trying to make some changes, but transaction is not working as I expected. In my current working example I am looking for a 'job name' and then the starting and ending log event. In my current code I am using one query:

index=anIndex sourcetype=aSourcetype aJobName AND ("START of script" OR "COMPLETED OK")

This works fine when there are no issues, but if a job fails there will be multiple "START of script" events and only one "COMPLETED OK" event. So I tried reworking my query as follows, to get only the most recent of each of the start/completed log events:

index=anIndex sourcetype=aSourcetype aJobName AND "START of script" | head 1 | append [ index=anIndex sourcetype=aSourcetype aJobName AND "COMPLETED OK" | head 1 ]

But when I get to the part of creating a transaction, the transaction only has the starting log event:

| rex "(?<event_name>(START of script)|(COMPLETED OK))"
| eval event_name=CASE(event_name="START of script", "script_start", event_name="COMPLETED OK", "script_complete")
| eval event_time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| eval {event_name}_time=_time
| rex field=_raw "Batch::(?<batchJobName>[^\s]*)"
| transaction keeporphans=true host batchJobName startswith=(event_name="script_start") endswith=(event_name="script_complete")

Is the use of | append [...] the cause? If append cannot be used before transaction, what other way can I get the data I'm looking for?
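A minimal sketch of an alternative that avoids transaction altogether: keep only the most recent event of each type per job with dedup, then pivot the two timestamps onto one row with stats (index, sourcetype, and field names follow the query above):

index=anIndex sourcetype=aSourcetype aJobName ("START of script" OR "COMPLETED OK")
| rex "(?<event_name>START of script|COMPLETED OK)"
| rex field=_raw "Batch::(?<batchJobName>[^\s]*)"
| dedup host batchJobName event_name
| stats max(eval(if(event_name="START of script", _time, null()))) AS start_time max(eval(if(event_name="COMPLETED OK", _time, null()))) AS complete_time by host batchJobName
| eval duration=complete_time - start_time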
Hi, how can we fix this issue on the ES SH?

Health Check: msg="A script exited abnormally with exit status: 1" input=".$SPLUNK_HOME/etc/apps/splunk-dashboard-studio/bin/save_image_and_icon_on_install.py" stanza="default"

Thanks.
I'm having some trouble coming up with the SPL for the following situation: I have a series of events with a timestamp. These events have an extracted field with a value of either "YES" or "NO". When sorted by _time we end up with a list like the following:

_time  Result
time1  YES
time2  NO
time3  NO
time4  YES

I'd like to count the duration between the "NO" values and the next "YES" value. So in this case we'd have a duration equal to time4 - time2.

index=* sourcetype=*mantec* "Computer name" = raspberry_pi06 "Risk name" = WS.Reputation.1
| sort _time
| eval removed = if('Actual action' == "Quarantined", "YES", "NO")
| streamstats reset_before="("removed==\"YES\"")" last(_time) as lastTime first(_time) as firstTime count BY removed
| eval duration = round((lastTime - firstTime)/60,0)
| table removed duration count _time

I've tried to lean on streamstats, but the result resets the count at the last "NO" and doesn't count the time of the next "YES". We end up with a duration equal to time3 - time2. Also, in the case of a single "NO" followed by a "YES", we get a duration of 0, which is also incorrect. I feel like I'm missing something extremely obvious.
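A minimal sketch of one alternative: carry the timestamp of the first "NO" forward with streamstats, emit the duration on the "YES" event that follows it, and reset after that "YES" so the next NO..YES gap starts fresh (filters and field names follow the query above):

index=* sourcetype=*mantec* "Computer name" = raspberry_pi06 "Risk name" = WS.Reputation.1
| sort 0 _time
| eval removed = if('Actual action' == "Quarantined", "YES", "NO")
| streamstats reset_after="("removed==\"YES\"")" min(eval(if(removed="NO", _time, null()))) AS first_no_time
| where removed="YES" AND isnotnull(first_no_time)
| eval duration = round((_time - first_no_time)/60, 0)
| table _time removed duration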
Hi folks, I am trying to figure out how to compare a single field based off another field called timestamp. I pull data into Splunk via a JSON file that looks like the following:

{"table": "Route", "timestamp": "2023-11-07T12:25:43.208903", "dst": "10.240.0.0/30"}
{"table": "Route", "timestamp": "2023-11-07T12:25:43.208903", "dst": "10.241.0.0/30"}
{"table": "Route", "timestamp": "2023-11-07T12:25:43.208903", "dst": "10.242.0.0/30"}
{"table": "Route", "timestamp": "2023-11-10T13:12:17.529455", "dst": "10.240.0.0/30"}
{"table": "Route", "timestamp": "2023-11-10T13:12:17.529455", "dst": "10.241.0.0/31"}
{"table": "Route", "timestamp": "2023-11-10T13:12:17.529455", "dst": "10.245.0.0/30"}

There will be tens or hundreds of unique dst values, all with the same timestamp value. What I'd like to do is take all dst values for one timestamp value and compare them against the set of dst values for a different timestamp value. So far, I've been able to do an appendcols plus a simple eval to compare the stats values from one timestamp to another:

index=<index> host=<host> sourcetype=_json timestamp=2023-11-07T12:25:43.208903
| stats values(dst) as old_prefix
| appendcols [search index=<index> host=<host> sourcetype=_json timestamp=2023-11-10T13:12:17.529455 | stats values(dst) as new_prefix]
| eval result=if(old_prefix=new_prefix, "pass","fail")
| table old_prefix new_prefix result

And these are the results I get (a single row with multivalue old_prefix and new_prefix cells):

old_prefix      new_prefix      result
10.240.0.0/30   10.240.0.0/30   fail
10.241.0.0/30   10.241.0.0/31
10.242.0.0/30   10.245.0.0/30

But what I'd really want to see is something along the lines of this:

old_prefix      new_prefix      result  present_in_old_table  present_in_new_table
10.240.0.0/30   10.240.0.0/30   pass
10.241.0.0/30                   fail    10.241.0.0/30
                10.241.0.0/31   fail                          10.241.0.0/31
10.242.0.0/30                   fail    10.242.0.0/30
                10.245.0.0/30   fail                          10.245.0.0/30

Or this (again a single row, with multivalue cells):

old_prefix      new_prefix      result  present_in_old_table  present_in_new_table
10.240.0.0/30   10.240.0.0/30   fail    10.241.0.0/30         10.241.0.0/31
10.241.0.0/30   10.241.0.0/31           10.242.0.0/30         10.245.0.0/30
10.242.0.0/30   10.245.0.0/30

Is this something that could be reasonably done inside Splunk? Please let me know if you have any further questions for me.
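A minimal sketch of one way to get per-prefix results without appendcols: tag each event as old or new based on its timestamp, group by dst, and derive the presence columns from which sides each dst appeared on (the index/host placeholders and the two timestamps are taken from the query above):

index=<index> host=<host> sourcetype=_json (timestamp="2023-11-07T12:25:43.208903" OR timestamp="2023-11-10T13:12:17.529455")
| eval table_side=if(timestamp="2023-11-07T12:25:43.208903", "old", "new")
| stats values(table_side) AS sides by dst
| eval result=if(mvcount(sides)=2, "pass", "fail")
| eval present_in_old_table=if(isnotnull(mvfind(sides, "old")), dst, null())
| eval present_in_new_table=if(isnotnull(mvfind(sides, "new")), dst, null())
| table dst result present_in_old_table present_in_new_table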
Hi, I am trying to build a dashboard and I need queries for the following searches:

1. Report false positives as a proportion of the total.
2. Report monthly Splunk alerts by severity: high, medium, low.

Can anyone help me build these?
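A minimal sketch of what the monthly severity breakdown could look like, assuming the alerts are Enterprise Security notable events with an urgency field (the index and field names are assumptions about the environment, not taken from the post):

index=notable urgency IN ("high", "medium", "low")
| timechart span=1mon count by urgency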
Hi, I need some help creating a table from the JSON events below. Can someone please help me with that? The table columns should be 'Name' and 'Count'; Name should hold "cruice", "crpice", etc., and Count should have the corresponding values. Any help would be appreciated. Thanks.

Each line arrives as a separate event (host = iapp6373.howard.ms.com, source = /tmp/usage_snapshot.json, sourcetype = tsproid_prod.db2ts_log_generator:app), all timestamped 11/7/23 9:04:23.616 PM:

"Year": {
"Top30RequesterInOneYear": {
"cruice": 2289449,
"crpice": 1465846,
"zathena": 1017289,
"qrecon": 864252,
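Since the JSON arrives broken into one fragment per event, spath won't see a complete object; a minimal sketch that instead pulls the name/count pairs out of the fragment lines with a regex (the index name is a placeholder; the source and sourcetype are taken from the events above):

index=<your_index> sourcetype="tsproid_prod.db2ts_log_generator:app" source="/tmp/usage_snapshot.json"
| rex "\"(?<Name>[^\"]+)\":\s*(?<Count>\d+)"
| where isnotnull(Count)
| table Name Count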
I don't know why Splunk doesn't distribute clear instructions or tools to install and configure the forwarder on Linux properly. Red Hat 9.x does not have init.d, so you need to set boot-start with -systemd-managed 1, but the service, even once installed, also needs systemctl enable SplunkForwarder.service. In Red Hat 8 this is not the case. The latest forwarder, 9.1.1, also won't set up properly if you don't use user-seed.conf.

I came up with this, which does its job somehow; it would be nice if someone added their ideas to make it better. (I'm running Splunk as root for testing purposes.)

#!/bin/bash
SPLUNK_FILE="splunkforwarder-9.1.1-64e843ea36b1.x86_64.rpm"

# Install the universal forwarder package
rpm -ivh "$SPLUNK_FILE"

# Change ownership to root (running Splunk as root for testing)
chown -R root:root /opt/splunkforwarder

# Create user-seed.conf so Splunk sets the admin credentials without user interaction
cat <<EOF > /opt/splunkforwarder/etc/system/local/user-seed.conf
[user_info]
USERNAME = admin
PASSWORD = changeme
EOF

# Configure Splunk (Red Hat 8.x, init.d-style boot start)
/opt/splunkforwarder/bin/splunk set deploy-poll 192.168.68.129:8089 --accept-license --answer-yes --auto-ports --no-prompt
/opt/splunkforwarder/bin/splunk enable boot-start -systemd-managed 0
/opt/splunkforwarder/bin/splunk start --no-prompt --answer-yes

# Configure Splunk (Red Hat 9.x, systemd-managed service)
#/opt/splunkforwarder/bin/splunk set deploy-poll 192.168.68.129:8089 --accept-license --answer-yes --auto-ports --no-prompt
#/opt/splunkforwarder/bin/splunk enable boot-start -systemd-managed 1
#systemctl enable SplunkForwarder.service
#systemctl start SplunkForwarder.service
Hi! I have the following table:

SESSION_ID  SUBMITTED_FROM  STAGE
1                           submit
1           startPage       someStage1
2                           submit
2           page1           someStage1
2           page2           someStage2

How could I count the number of SESSION_IDs that have SUBMITTED_FROM=startPage and STAGE=submit? Looking at the above table, the outcome of that logic should be 2 SESSION_IDs.
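A minimal sketch of one approach (the base search is a placeholder): collect each session's values, then filter. For the sample data, requiring both conditions within the same session (AND) would return 1, so the sketch uses OR to match the expected count of 2; treat that reading as an assumption and switch the OR to AND if both must be present:

index=<your_index>
| stats values(SUBMITTED_FROM) AS submitted_from values(STAGE) AS stages by SESSION_ID
| where isnotnull(mvfind(submitted_from, "^startPage$")) OR isnotnull(mvfind(stages, "^submit$"))
| stats dc(SESSION_ID) AS session_count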