All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi there, One of my colleagues with admin access created a dashboard for auditing who logged into Splunk and how many times each user logged in over the last 7 days. One of the users left the organization in January; we deleted the account using an admin login and transferred all of their knowledge objects to another user. But now we are still seeing their name in the dashboard, and alerts are triggering with their name as well. We checked the user list again and the name is not there, yet it still appears in the alerts and dashboard. Can anyone help me with it? The search query used for creating the dashboard is:

index=_internal sourcetype=splunkd_access
| timechart span=6h count by user

The raw event displayed when searching the query is:

127.0.0.1 - name of the user* [28/Mar/2022:05:28:17.505 +0000] "POST /servicesNS/nobody/search/saved/searches/Single%20User%20Failed%20Attempt/notify?trigger.condition_state=1 HTTP/1.1" 200 1933 "-" "Splunk/8.1.0 (Linux 4.15.0-1023-azure; arch=x86_64)" - 2ms

Please help me to resolve it, and thanks in advance.
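The raw event above is a saved-search notify call logged under that user name, which suggests a scheduled search or alert may still be dispatched as the deleted account rather than a person logging in. A hedged first check, via the REST search command, is to list saved searches and their owners and look for anything still tied to that user (the owner value below is a placeholder):

| rest /servicesNS/-/-/saved/searches
| table title eai:acl.app eai:acl.owner
| search eai:acl.owner="deleted_user_name"

Also note that a 7-day timechart keeps showing historical events from before the deletion until they age out of the window.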
Hi Team, I have indexed a file using the current timestamp, but I would like to run queries that treat the timestamp in the filename as _time. Is that possible now that the data is indexed? If yes, how do we do that?
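Already-indexed events keep their indexed _time, but you can override _time for the duration of a search. A minimal hedged sketch, assuming the filename (the source field) contains a date such as 20220326 — the index name, rex pattern, and format string are illustrative and must match your actual filenames:

index=your_index
| rex field=source "(?<file_date>\d{8})"
| eval _time=strptime(file_date, "%Y%m%d")
| timechart count

To fix it permanently you would need to re-ingest the file with timestamp extraction configured against source.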
Hi Experts, When using the following eval, I would like to declare the variables via a macro, as in create_var(3):

| eval var_1 = if(isnull(var_1),"", var_1), var_2 = if(isnull(var_2),"", var_2), var_3 = if(isnull(var_3),"", var_3)

In some cases we want to use a macro because we need to define more than 30 variables. I am thinking I could use foreach or map inside the macro, but I am not sure how to do it. Any advice you could give me would be greatly appreciated!
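A hedged alternative that avoids generating numbered names at all: a wildcarded foreach applies the same null-guard to every var_* field present in the results, so the macro body can stay one line regardless of how many variables there are:

| foreach var_* [ eval <<FIELD>> = if(isnull('<<FIELD>>'), "", '<<FIELD>>') ]

One caveat (an assumption about your data): foreach only iterates over fields that exist in at least one result. If some var_N fields may be entirely absent, you would still need to name them once, e.g. | fillnull value="" var_1 var_2 var_3.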
Hi, folks. We were trying to ingest data from OpenShift using Splunk Connect for Kubernetes. So far we have succeeded in ingesting from the whole infrastructure, so now we are wondering if we can limit which specific namespaces we ingest data from. As far as I understood from this link (https://github.com/splunk/splunk-connect-for-kubernetes#managing-sck-log-ingestion-by-using-annotations), it explains how to route data per namespace to specific indexes, but it still collects from all namespaces. Our goal is to collect only from certain namespaces, e.g. just 2 namespaces instead of all of them. Is that possible with the add-on? Did I miss some detailed explanation? Please advise.
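One hedged approach that doesn't rely on annotations: container log files on each node are named <pod>_<namespace>_<container>-<id>.log, so the logging chart's fluentd tail path can be narrowed with namespace globs. A sketch for the logging chart's values.yaml — the key layout and namespace names here are assumptions to verify against your chart version:

fluentd:
  path: /var/log/containers/*_namespace-one_*.log,/var/log/containers/*_namespace-two_*.log

fluentd's in_tail plugin accepts a comma-separated path list, so only files from those namespaces would be collected at the source.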
Hi, I am looking for various types of sample log dumps, similar to tutorialdata.zip, for exploring Splunk search options. Appreciate your help. Best Regards, Anna
hi, can anyone help me with how to query the counts of kafka_datatype for those stream_type values? I want to set an alert that fires if there is no increase within 3 hours, but only during the time period 7am-8pm. I have this query:

index="pcg_p4_datataservices_prod" sourcetype="be:monitoring-services"
| setfields a=a
| rex "^[^\|\n]*\|\s+(?P<kafka_datatype>\w+)\s+\-\s+(?P<kafka_count>.+)"
| search kafka_datatype IN (PRODUCED, CONSUMED)
| search stream_type IN (Datascore_Compress, Datascore_Decompress, Eservices_Eload, Eservices_Ebills)
| eval service_details=stream_type
| timechart span=3h limit=0 sum(kafka_count) by service_details

I tried adding earliest/latest (-180m, I guess, for 3 hours), but it does not show the result I am aiming for, i.e. detecting when the count of kafka_datatype for those stream_type values has not increased for 3 hours:

index="pcg_p4_datataservices_prod" sourcetype="be:monitoring-services" earliest=-180m latest=now
| setfields a=a
| rex "^[^\|\n]*\|\s+(?P<kafka_datatype>\w+)\s+\-\s+(?P<kafka_count>.+)"
| search kafka_datatype IN (PRODUCED, CONSUMED)
| search stream_type IN (Datascore_Compress, Datascore_Decompress, Eservices_Eload, Eservices_Ebills)
| eval service_details=stream_type
| timechart span=3h limit=0 sum(kafka_count) by service_details
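A hedged sketch of one way to do this, interpreting "no increase" as "no events counted in the last 3 hours" (adjust if kafka_count is a cumulative counter). The append of zero rows makes stream_types with no data at all still show up, since stats alone drops them; everything not taken from your query is an assumption:

index="pcg_p4_datataservices_prod" sourcetype="be:monitoring-services" earliest=-3h
| rex "^[^\|\n]*\|\s+(?P<kafka_datatype>\w+)\s+\-\s+(?P<kafka_count>.+)"
| search kafka_datatype IN (PRODUCED, CONSUMED) stream_type IN (Datascore_Compress, Datascore_Decompress, Eservices_Eload, Eservices_Ebills)
| stats sum(eval(tonumber(kafka_count))) AS total by stream_type
| append
    [| makeresults
     | eval stream_type=split("Datascore_Compress,Datascore_Decompress,Eservices_Eload,Eservices_Ebills", ","), total=0
     | mvexpand stream_type]
| stats max(total) AS total by stream_type
| where total=0

For the 7am-8pm restriction, the simplest lever is the alert's cron schedule, e.g. 0 10-20 * * *, so the 3-hour lookback always falls inside the working window.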
I've been trying to set up Splunk Security Essentials but keep running into JavaScript errors and other odd behaviour. When I run the automated data introspection in "Step One: CIM Searches", I always get 42 completed searches; the remaining 22 searches complete but are never marked as completed (clicking the link to the search brings up the results I'd expect to see). The browser Dev Tools console shows an error that window.updateOrMergeProducts is not a function, and this seems to match up with the searches that are never marked as completed. I've also noticed a 400 Bad Request on POST requests to __raw/servicesNS/<username>/Splunk_Security_Essentials/search/jobs. Checking these, I can see they are all for searches like | from datamodel:Identity_Management.All_Assets | head 300000 | stats count, and the error that comes back is that the data model doesn't exist. I'm unsure whether something went wrong with the installation or whether it's because the inventorying hasn't completed yet. Unfortunately, I also get errors when trying to configure the Data Inventory manually, where I can't attach a product to 2 categories, e.g. successful authentications & failed authentications. I've tried resetting several times without any progress. I'm running Splunk Enterprise 8.2.5 and Splunk Security Essentials 3.5.0. Has anyone come across this behaviour before?
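On the "data model doesn't exist" part: Identity_Management is a CIM data model, so that error commonly means the Splunk Common Information Model (CIM) add-on is not installed on the search head. A hedged quick check from the search bar lists every data model the instance actually knows about:

| datamodel

If Identity_Management is missing from that output, installing or updating the CIM add-on would be the first thing to try; the window.updateOrMergeProducts error may be a separate app bug worth reporting.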
Hi, I have a parent panel with the table below:

Function Name   Success   Failure   SLA
greet           34        5         13.5
NGA             43        0         67.5
Customer        54        1         45

It has two drilldown panels: the 1st drilldown panel should appear when a value in the Failure column is clicked, and the 2nd drilldown panel should appear when a value in the SLA column is clicked. Thanks in advance.
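A hedged Simple XML sketch (the token names are mine, not required ones): branch on the clicked column with <condition field="...">, set one token per case, and gate each drilldown panel with depends:

<drilldown>
  <condition field="Failure">
    <set token="show_failure">true</set>
    <unset token="show_sla"></unset>
  </condition>
  <condition field="SLA">
    <set token="show_sla">true</set>
    <unset token="show_failure"></unset>
  </condition>
</drilldown>

Then give the first child panel depends="$show_failure$" and the second depends="$show_sla$"; clicking one column hides the other panel via the unset.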
Hello, I use a text input token in my search like this: town=$town$. By default, town = *. The problem is that the field town sometimes doesn't exist in my events. When I choose *, can I still retrieve that kind of event? Is it possible? Thanks
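town=* only matches events where the field exists, so events without town are dropped. One hedged workaround (the index name and the "unknown" value are placeholders): give the missing field a dummy value before the token filter, so * matches those events too.

index=your_index
| fillnull value="unknown" town
| search town=$town$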
I want an if/else condition in which I need to select an address (path). Suppose:

if (condition == something) { go to this path (ravi/go/bin.log) }
else if (condition == something) { go to this path (ravi/python/bin.log) }

Please help me with this. How can we do that?
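If this is meant inside an SPL search, a hedged translation uses eval with case(); the field name condition and the match values are placeholders taken from your pseudocode:

| eval path=case(condition=="something",      "ravi/go/bin.log",
                 condition=="something_else", "ravi/python/bin.log",
                 true(),                      "unknown")

case() evaluates its clauses top to bottom like an if/else-if chain, and the true() arm acts as the final else.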
How do I convert the query below, whose summarization status shows as unknown?

index="netsec_firewall" sourcetype="pan:traffic" action="allowed" app:technology="client-server"
| stats first(start_time) AS start_time count by app user src_ip src_host dest_ip dest_host dest_port duration
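If the goal is to sidestep report acceleration altogether, a hedged alternative is tstats against the CIM Network_Traffic data model, assuming the Palo Alto traffic data is CIM-mapped (the field names below are the data model's, and only an approximation of the stats line above):

| tstats count from datamodel=Network_Traffic
    where All_Traffic.action="allowed"
    by All_Traffic.app All_Traffic.user All_Traffic.src_ip All_Traffic.dest_ip All_Traffic.dest_port

Otherwise, a summarization status of unknown on a newly accelerated report can simply mean the summary has not been built yet; it is worth rechecking after the next acceleration cycle.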
We have upgraded our NIPS, and the management tool has a different IP address than the old one. The NIPS is sending data to our syslog server, which is putting it in our unassigned folder. I edited the syslog filters to put any messages from the new IP address in the NIPS folder for monitoring, and I have restarted the syslog-ng service and the Splunk service. I have confirmed the logs are now being written to the NIPS folder on the syslog server, but Splunk is still importing them into the unassigned index instead of the NIPS index and showing the source as the old folder. I have also confirmed that no new entries have been put in the old folder since the change. The inputs.conf looks like this:

[monitor:///app01/syslog/data/siem01/nips/.../*messages]
_rcvbuf = 1572864
disabled = false
host = siem01
host_segment = 6
index = nips
sourcetype = mcafee:ips

[monitor:///app01/syslog/data/siem01/unassigned/.../*messages]
_rcvbuf = 1572864
disabled = false
host = siem01
host_segment = 6
index = unassigned
sourcetype = syslog

Where else can I look that might be causing the data not to go into the NIPS index?
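Two hedged checks to run on the forwarder itself (standard Splunk CLI; the paths come from your stanzas). btool shows which monitor configuration actually won after config layering, and inputstatus shows which files the tailing processor is currently reading and from where:

$SPLUNK_HOME/bin/splunk btool inputs list monitor --debug
$SPLUNK_HOME/bin/splunk list inputstatus

If inputstatus still shows the files under the old unassigned path, a common cause (an assumption to verify) is syslog-ng writing the same stream to both destinations, or another inputs.conf elsewhere in the layering still monitoring the old location.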
I'm having issues with downloading. After I press download, it takes me to the Splunk Software License Agreement page without downloading anything, and there is no checkbox or button on that page to continue the process.
If I want to use a field (alarm_time) from the main search as a search criterion for a subsearch, what code should I write? In the following code, I want to find the work that was in progress at the alarm time. Condition: work_start < alarm_time < work_end. The search result I want to get is the row with work_name=work_b.

| makeresults
| eval _raw="alarm_time,host,message
2022/03/26 18:05,test_node,test_down"
| multikv forceheader=1
| eval alarm_time_strp = strptime(alarm_time,"%Y/%m/%d %H:%M")
| join type=left host
    [| makeresults
     | eval _raw="host,work_start,work_end,work_name
test_node,2022/03/26 17:00,2022/03/26 18:00,work_a
test_node,2022/03/26 18:00,2022/03/26 19:00,work_b
test_node,2022/03/26 19:00,2022/03/26 20:00,work_c"
     | multikv forceheader=1
     | eval work_start_strp = strptime(work_start,"%Y/%m/%d %H:%M")
     | eval work_end_strp = strptime(work_end,"%Y/%m/%d %H:%M")]
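Since the join has already placed work_start_strp and work_end_strp on the same row as alarm_time_strp, a hedged completion is simply a where clause after the join. max=0 is added because join otherwise keeps only the first matching row per host:

| join type=left max=0 host
    [ ...subsearch as above... ]
| where alarm_time_strp >= work_start_strp AND alarm_time_strp < work_end_strp

With the sample data this keeps only the work_b row, since 18:05 falls in [18:00, 19:00).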
I have a data set to which I am trying to apply a group-by on multiple columns. I tried stats with list and ended up with this output:

country   state       time             #travel
India     Bangalore   20220326023652   1
                      20220326023652   1
                      20220327023321   1
                      20220327023321   1
                      20220327023321   1

Whereas I am looking for something like this:

country   state       time             #travel
India     Bangalore   20220326023652   2
                      20220327023321   3

Any suggestions on the right query, please!
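A hedged sketch, assuming each event represents one travel record: make time part of the group-by instead of listing it, and count the rows per group (the index name is a placeholder):

index=your_index
| stats count AS "#travel" by country, state, time

This yields one row per (country, state, time) with counts 2 and 3; country and state simply repeat on each row rather than appearing once, which is how stats renders grouped output.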
Hello, I am trying to set up a search where we look for single source IPs hitting multiple destination IPs on our firewall.

1. When I do a search, I get TONS of results for destinations, but I want to limit the destination results to show only a small sample set.
2. I also get results that show only one destination IP, which we do not want.

The search I am using as an example is:

index=pan_logs eventtype=pan_traffic dvc="FD0*.*" action=allow OR action=allowed OR action=alert app=sip OR dest_port=5060 OR dest_port=5061 AND src_ip!=10.0.0.0/8 AND src_ip!=172.16.0.0/12 AND src_ip!=192.168.0.0/16
| stats values(rule) values(dest_ip) values(dest_port) count by src_ip vendor_action app dvc vsys
| sort bt count desc limit=10
| sort dest_ip
| where count > 500
| fields src_ip dvc vsys values(rule) app values(dest_ip) values(dest_port) vendor_action count
| rename src_ip AS "Source IP", vendor_action AS "Action", values(rule) AS "Firewall Rule", values(dest_ip) AS "Target IP", values(dest_port) AS "Destination Port", count AS "Total Count", dvc AS "Device", app AS "Application"
| head 20

Example search result
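A hedged rework addressing both points: dc(dest_ip) > 1 drops single-destination sources, and mvindex trims the values() list to a small sample (the cap of 5 is arbitrary). The bare OR/AND chain is also parenthesized, since without parentheses SPL will not group those terms the way the intent reads:

index=pan_logs eventtype=pan_traffic dvc="FD0*.*" (action=allow OR action=allowed OR action=alert) (app=sip OR dest_port=5060 OR dest_port=5061) NOT (src_ip=10.0.0.0/8 OR src_ip=172.16.0.0/12 OR src_ip=192.168.0.0/16)
| stats values(rule) AS rules, values(dest_ip) AS dest_ips, values(dest_port) AS dest_ports, dc(dest_ip) AS dest_count, count by src_ip vendor_action app dvc vsys
| where count > 500 AND dest_count > 1
| eval dest_ips=mvindex(dest_ips, 0, 4)
| sort - count
| head 20
| rename src_ip AS "Source IP", vendor_action AS "Action", rules AS "Firewall Rule", dest_ips AS "Target IP (sample)", dest_ports AS "Destination Port", count AS "Total Count", dvc AS "Device", app AS "Application"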
Using the Splunk Add-on for AWS to collect EC2 instance metadata, I get an array called tags with key/value pairs such as the below. What I want to do is extract the cluster name as a distinct variable so that I can search on it, or even better, aggregate on it. Thoughts?

{
  Key: hostname
  Value: elasticsearch001
}
{
  Key: cluster
  Value: systemlogs
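A hedged sketch, assuming the raw event is JSON with a tags array of {Key, Value} objects (the index filter is a placeholder; adjust the spath path if tags is nested deeper in your events):

index=aws_metadata
| spath path=tags{} output=tag
| mvexpand tag
| spath input=tag path=Key output=tag_key
| spath input=tag path=Value output=tag_value
| where tag_key="cluster"
| rename tag_value AS cluster
| stats count by cluster

The mvexpand/spath pair turns each tag object into its own row so Key and Value stay aligned; after that, cluster is an ordinary field you can search or group on.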
Hi there, hoping this is a quick question: I've got a search which polls for several event log types, and I want to put them into a table by event type using the number of hosts in each event type, rather than just the total of events per type. Right now it looks something like:

(searchForA=A) (searchForB=B) (searchForC=C) (searchForD=D)
| eval EventType=case(
    match(searchForA, "A"), "Results of A",
    match(searchForB, "B"), "Results of B",
    match(searchForC, "C"), "Results of C",
    match(searchForD, "D"), "Results of D")
| stats count by EventType

This shows me the counts of each event type no problem, and it works to show really big numbers, but what I'd like to show is a count of hosts per event type... so like... | stats count by host PER EventType. Any help would be great!
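If "count of hosts" means distinct hosts, a hedged one-line swap at the end does it:

| stats dc(host) AS host_count by EventType

If instead you want the event count broken out per host within each type, it's just a two-field group-by: | stats count by EventType, host.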
Hi, I have a simple stats:

| stats values(Field1) sum(TB) by Field2 date_month

This gives me one row for each month:

Field1   10   Field2   Jan
Field1   15   Field2   Feb

I want to see it like below, so each month is a column on the same row, grouped by the fields:

Field1   Field2   Jan   Feb
                  10    15

Tried transpose and some other suggestions. I just keep missing. Thanks, Chris
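A hedged sketch using chart, which pivots a split-by field into columns. chart allows only one row-group field, so Field1 and Field2 are glued together first and split back afterwards (the ":" separator is arbitrary):

| eval group=Field1.":".Field2
| chart sum(TB) over group by date_month
| eval Field1=mvindex(split(group, ":"), 0), Field2=mvindex(split(group, ":"), 1)
| fields - group
| table Field1 Field2 *

Note that date_month sorts alphabetically (Feb before Jan); if ordering matters, charting by a zero-padded month number and renaming the columns afterwards is the usual workaround.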
Hi! I have an unstructured log in the following format, and I can't seem to figure out how to count the number of occurrences of each key in "keys":

log: I, [2022-03-25T18:29:43.325002 #55] INFO -- : {:entry=>[{:op=>"operation1", :keys=>["key:my_key5, size:6309"]}]}
log: I, [2022-03-25T18:29:43.324043 #56] INFO -- : {:entry=>[{:op=>"operation2", :keys=>["key:my_key6, size:159", "key:my_key5, size:6309", "key:my_key7, size:151", "key:my_key8, size:132"]}]}
log: I, [2022-03-25T18:29:43.322759 #57] INFO -- : {:entry=>[{:op=>"operation3", :keys=>["key:smy_key9, size:4"]}]}
log: I, [2022-03-25T18:29:43.317421 #58] INFO -- : {:entry=>[{:op=>"operation3", :keys=>["key:my_key6, size:159"]}]}
log: I, [2022-03-25T18:29:43.311789 #55] INFO -- : {:entry=>[{:op=>"operation1", :keys=>["key:7, size:151"]}]}

What I'm trying to get is the count of each key in keys[]. For example, the above would yield the following result:

my_key5   2
my_key6   2
my_key7   1
my_key8   1
my_key9   1

Ideally I could display the "size" of each key as well, like a table or something. But that might be too complicated.

What I have so far is only a query that counts the number of occurrences of each operation:

| rex field=log "op=>\"(?<operation>\w*)\""
| stats count by operation

but I'm not sure how to count the unique keys inside the array.
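A hedged sketch: extract every "key..., size:..." pair with max_match=0 so rex returns a multivalue field, expand it to one row per pair, then split and count. The regex assumes the exact key:<name>, size:<n> layout shown above:

| rex field=log max_match=0 "key:(?<pair>[^\"]+?, size:\d+)"
| mvexpand pair
| rex field=pair "^(?<key>[^,]+), size:(?<size>\d+)"
| stats count, values(size) AS sizes by key

Extracting key and size together as one pair before expanding keeps each size aligned with its own key, which is why this also gets you the size column almost for free.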